00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 115 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3293 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.085 The recommended git tool is: git 00:00:00.085 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.126 Fetching changes from the remote Git repository 00:00:00.128 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.815 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.827 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.839 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.840 > git config core.sparsecheckout # timeout=10 00:00:05.853 > git read-tree -mu HEAD # timeout=10 00:00:05.869 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.906 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.907 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:05.991 [Pipeline] Start of Pipeline 00:00:06.002 [Pipeline] library 00:00:06.004 Loading library shm_lib@master 00:00:06.004 Library shm_lib@master is cached. Copying from home. 00:00:06.017 [Pipeline] node 00:00:06.023 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:06.028 [Pipeline] { 00:00:06.039 [Pipeline] catchError 00:00:06.040 [Pipeline] { 00:00:06.052 [Pipeline] wrap 00:00:06.060 [Pipeline] { 00:00:06.068 [Pipeline] stage 00:00:06.070 [Pipeline] { (Prologue) 00:00:06.085 [Pipeline] echo 00:00:06.086 Node: VM-host-SM17 00:00:06.091 [Pipeline] cleanWs 00:00:06.098 [WS-CLEANUP] Deleting project workspace... 00:00:06.098 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.104 [WS-CLEANUP] done 00:00:06.279 [Pipeline] setCustomBuildProperty 00:00:06.360 [Pipeline] httpRequest 00:00:06.379 [Pipeline] echo 00:00:06.381 Sorcerer 10.211.164.101 is alive 00:00:06.388 [Pipeline] httpRequest 00:00:06.392 HttpMethod: GET 00:00:06.392 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.393 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.406 Response Code: HTTP/1.1 200 OK 00:00:06.407 Success: Status code 200 is in the accepted range: 200,404 00:00:06.408 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:10.093 [Pipeline] sh 00:00:10.374 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:10.392 [Pipeline] httpRequest 00:00:10.413 [Pipeline] echo 00:00:10.414 Sorcerer 10.211.164.101 is alive 00:00:10.422 [Pipeline] httpRequest 00:00:10.427 HttpMethod: GET 00:00:10.427 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:10.428 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:10.442 Response Code: HTTP/1.1 200 OK 00:00:10.442 Success: Status code 200 is in the accepted range: 200,404 00:00:10.443 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:44.989 [Pipeline] sh 00:00:45.270 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:48.598 [Pipeline] sh 00:00:48.879 + git -C spdk log --oneline -n5 00:00:48.879 241d0f3c9 test: fix dpdk builds on ubuntu24 00:00:48.879 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:00:48.879 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:48.879 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:48.879 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:48.896 [Pipeline] withCredentials 00:00:48.906 > git --version # timeout=10 00:00:48.919 > git --version # 'git version 2.39.2' 00:00:48.934 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:48.936 [Pipeline] { 00:00:48.946 [Pipeline] retry 00:00:48.948 [Pipeline] { 00:00:48.965 [Pipeline] sh 00:00:49.245 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:49.258 [Pipeline] } 00:00:49.278 [Pipeline] // retry 00:00:49.282 [Pipeline] } 00:00:49.301 [Pipeline] // withCredentials 00:00:49.309 [Pipeline] httpRequest 00:00:49.327 [Pipeline] echo 00:00:49.329 Sorcerer 10.211.164.101 is alive 00:00:49.336 [Pipeline] httpRequest 00:00:49.341 HttpMethod: GET 00:00:49.341 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:49.342 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:49.343 Response Code: HTTP/1.1 200 OK 00:00:49.343 Success: Status code 200 is in the accepted range: 200,404 00:00:49.344 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:55.286 [Pipeline] sh 00:00:55.582 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.492 [Pipeline] sh 00:00:57.771 + git -C dpdk log --oneline -n5 00:00:57.771 caf0f5d395 version: 22.11.4 00:00:57.771 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:57.771 dc9c799c7d vhost: fix missing spinlock 
unlock 00:00:57.771 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:57.771 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:57.892 [Pipeline] writeFile 00:00:57.903 [Pipeline] sh 00:00:58.208 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:58.280 [Pipeline] sh 00:00:58.557 + cat autorun-spdk.conf 00:00:58.557 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.557 SPDK_TEST_NVMF=1 00:00:58.557 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.557 SPDK_TEST_URING=1 00:00:58.557 SPDK_TEST_USDT=1 00:00:58.557 SPDK_RUN_UBSAN=1 00:00:58.557 NET_TYPE=virt 00:00:58.557 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:58.557 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:58.557 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.563 RUN_NIGHTLY=1 00:00:58.565 [Pipeline] } 00:00:58.582 [Pipeline] // stage 00:00:58.597 [Pipeline] stage 00:00:58.599 [Pipeline] { (Run VM) 00:00:58.613 [Pipeline] sh 00:00:58.891 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:58.891 + echo 'Start stage prepare_nvme.sh' 00:00:58.891 Start stage prepare_nvme.sh 00:00:58.891 + [[ -n 7 ]] 00:00:58.891 + disk_prefix=ex7 00:00:58.891 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 ]] 00:00:58.891 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf ]] 00:00:58.891 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf 00:00:58.891 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.891 ++ SPDK_TEST_NVMF=1 00:00:58.891 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.891 ++ SPDK_TEST_URING=1 00:00:58.891 ++ SPDK_TEST_USDT=1 00:00:58.891 ++ SPDK_RUN_UBSAN=1 00:00:58.891 ++ NET_TYPE=virt 00:00:58.891 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:58.891 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:58.891 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.891 ++ RUN_NIGHTLY=1 00:00:58.891 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:58.891 + nvme_files=() 00:00:58.891 + declare -A nvme_files 00:00:58.891 + backend_dir=/var/lib/libvirt/images/backends 00:00:58.891 + nvme_files['nvme.img']=5G 00:00:58.891 + nvme_files['nvme-cmb.img']=5G 00:00:58.891 + nvme_files['nvme-multi0.img']=4G 00:00:58.891 + nvme_files['nvme-multi1.img']=4G 00:00:58.891 + nvme_files['nvme-multi2.img']=4G 00:00:58.891 + nvme_files['nvme-openstack.img']=8G 00:00:58.891 + nvme_files['nvme-zns.img']=5G 00:00:58.891 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:58.891 + (( SPDK_TEST_FTL == 1 )) 00:00:58.891 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:58.891 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:58.891 + for nvme in "${!nvme_files[@]}" 00:00:58.891 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:58.891 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.891 + for nvme in "${!nvme_files[@]}" 00:00:58.891 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:58.891 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.891 + for nvme in "${!nvme_files[@]}" 00:00:58.891 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:58.891 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:58.891 + for nvme in "${!nvme_files[@]}" 00:00:58.891 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:58.892 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.892 + for nvme in "${!nvme_files[@]}" 00:00:58.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:58.892 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.892 + for nvme in "${!nvme_files[@]}" 00:00:58.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:58.892 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.892 + for nvme in "${!nvme_files[@]}" 00:00:58.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:58.892 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.892 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:58.892 + echo 'End stage prepare_nvme.sh' 00:00:58.892 End stage prepare_nvme.sh 00:00:58.904 [Pipeline] sh 00:00:59.185 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:59.185 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:00:59.185 00:00:59.185 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/scripts/vagrant 00:00:59.185 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk 00:00:59.185 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:59.185 HELP=0 00:00:59.185 DRY_RUN=0 00:00:59.185 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:59.185 NVME_DISKS_TYPE=nvme,nvme, 00:00:59.185 NVME_AUTO_CREATE=0 00:00:59.185 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:59.185 NVME_CMB=,, 00:00:59.185 NVME_PMR=,, 00:00:59.185 NVME_ZNS=,, 00:00:59.185 NVME_MS=,, 00:00:59.185 NVME_FDP=,, 
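
[editor's sketch] The prepare_nvme.sh stage above formats each raw backend image via spdk/scripts/vagrant/create_nvme_img.sh. A rough local equivalent is sketched below; the qemu-img invocation is an assumption (the helper script may do more), but its output format matches the "fmt=raw ... preallocation=falloc" lines in the log, and the sizes and ex7- prefix are taken from the traced loop.

#!/usr/bin/env bash
# Hedged stand-in for: create_nvme_img.sh -n <image> -s <size>
# Sizes and file names come from the nvme_files loop traced above;
# using qemu-img directly is an assumption, not the script's actual code.
backend_dir=/var/lib/libvirt/images/backends
declare -A nvme_files=(
  [nvme.img]=5G        [nvme-cmb.img]=5G    [nvme-zns.img]=5G
  [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
  [nvme-openstack.img]=8G
)
sudo mkdir -p "$backend_dir"
for img in "${!nvme_files[@]}"; do
  sudo qemu-img create -f raw -o preallocation=falloc \
    "$backend_dir/ex7-$img" "${nvme_files[$img]}"
done
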
00:00:59.185 SPDK_VAGRANT_DISTRO=fedora38 00:00:59.185 SPDK_VAGRANT_VMCPU=10 00:00:59.185 SPDK_VAGRANT_VMRAM=12288 00:00:59.185 SPDK_VAGRANT_PROVIDER=libvirt 00:00:59.185 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:59.185 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:59.185 SPDK_OPENSTACK_NETWORK=0 00:00:59.185 VAGRANT_PACKAGE_BOX=0 00:00:59.185 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/scripts/vagrant/Vagrantfile 00:00:59.185 FORCE_DISTRO=true 00:00:59.185 VAGRANT_BOX_VERSION= 00:00:59.185 EXTRA_VAGRANTFILES= 00:00:59.185 NIC_MODEL=e1000 00:00:59.185 00:00:59.185 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora38-libvirt' 00:00:59.185 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:01:02.465 Bringing machine 'default' up with 'libvirt' provider... 00:01:03.032 ==> default: Creating image (snapshot of base box volume). 00:01:03.032 ==> default: Creating domain with the following settings... 00:01:03.032 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721857268_c49cc5162544c4f80246 00:01:03.032 ==> default: -- Domain type: kvm 00:01:03.032 ==> default: -- Cpus: 10 00:01:03.032 ==> default: -- Feature: acpi 00:01:03.032 ==> default: -- Feature: apic 00:01:03.032 ==> default: -- Feature: pae 00:01:03.032 ==> default: -- Memory: 12288M 00:01:03.032 ==> default: -- Memory Backing: hugepages: 00:01:03.032 ==> default: -- Management MAC: 00:01:03.032 ==> default: -- Loader: 00:01:03.032 ==> default: -- Nvram: 00:01:03.032 ==> default: -- Base box: spdk/fedora38 00:01:03.032 ==> default: -- Storage pool: default 00:01:03.032 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721857268_c49cc5162544c4f80246.img (20G) 00:01:03.032 ==> default: -- Volume Cache: default 00:01:03.032 ==> default: -- Kernel: 00:01:03.032 ==> default: -- Initrd: 00:01:03.032 ==> default: -- Graphics Type: vnc 00:01:03.032 ==> default: -- Graphics Port: -1 00:01:03.032 ==> default: -- Graphics IP: 127.0.0.1 00:01:03.032 ==> default: -- Graphics Password: Not defined 00:01:03.032 ==> default: -- Video Type: cirrus 00:01:03.032 ==> default: -- Video VRAM: 9216 00:01:03.032 ==> default: -- Sound Type: 00:01:03.032 ==> default: -- Keymap: en-us 00:01:03.032 ==> default: -- TPM Path: 00:01:03.032 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:03.032 ==> default: -- Command line args: 00:01:03.032 ==> default: -> value=-device, 00:01:03.032 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:03.032 ==> default: -> value=-drive, 00:01:03.032 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:03.032 ==> default: -> value=-device, 00:01:03.032 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.032 ==> default: -> value=-device, 00:01:03.032 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:03.032 ==> default: -> value=-drive, 00:01:03.032 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:03.032 ==> default: -> value=-device, 00:01:03.032 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.032 ==> default: -> 
value=-drive, 00:01:03.032 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:03.032 ==> default: -> value=-device, 00:01:03.032 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.032 ==> default: -> value=-drive, 00:01:03.032 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:03.032 ==> default: -> value=-device, 00:01:03.032 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.290 ==> default: Creating shared folders metadata... 00:01:03.290 ==> default: Starting domain. 00:01:04.673 ==> default: Waiting for domain to get an IP address... 00:01:22.822 ==> default: Waiting for SSH to become available... 00:01:22.822 ==> default: Configuring and enabling network interfaces... 00:01:25.352 default: SSH address: 192.168.121.205:22 00:01:25.352 default: SSH username: vagrant 00:01:25.352 default: SSH auth method: private key 00:01:27.253 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:35.374 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:40.642 ==> default: Mounting SSHFS shared folder... 00:01:41.639 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:41.639 ==> default: Checking Mount.. 00:01:43.099 ==> default: Folder Successfully Mounted! 00:01:43.100 ==> default: Running provisioner: file... 00:01:43.667 default: ~/.gitconfig => .gitconfig 00:01:44.234 00:01:44.234 SUCCESS! 00:01:44.234 00:01:44.234 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora38-libvirt and type "vagrant ssh" to use. 00:01:44.234 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:44.234 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora38-libvirt" to destroy all trace of vm. 00:01:44.234 00:01:44.243 [Pipeline] } 00:01:44.263 [Pipeline] // stage 00:01:44.273 [Pipeline] dir 00:01:44.274 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora38-libvirt 00:01:44.275 [Pipeline] { 00:01:44.290 [Pipeline] catchError 00:01:44.292 [Pipeline] { 00:01:44.307 [Pipeline] sh 00:01:44.586 + vagrant ssh-config --host vagrant 00:01:44.586 + sed -ne /^Host/,$p 00:01:44.586 + tee ssh_conf 00:01:48.774 Host vagrant 00:01:48.774 HostName 192.168.121.205 00:01:48.774 User vagrant 00:01:48.774 Port 22 00:01:48.774 UserKnownHostsFile /dev/null 00:01:48.774 StrictHostKeyChecking no 00:01:48.774 PasswordAuthentication no 00:01:48.774 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:48.774 IdentitiesOnly yes 00:01:48.774 LogLevel FATAL 00:01:48.774 ForwardAgent yes 00:01:48.774 ForwardX11 yes 00:01:48.774 00:01:48.787 [Pipeline] withEnv 00:01:48.789 [Pipeline] { 00:01:48.803 [Pipeline] sh 00:01:49.080 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:49.080 source /etc/os-release 00:01:49.080 [[ -e /image.version ]] && img=$(< /image.version) 00:01:49.080 # Minimal, systemd-like check. 
00:01:49.080 if [[ -e /.dockerenv ]]; then 00:01:49.080 # Clear garbage from the node's name: 00:01:49.080 # agt-er_autotest_547-896 -> autotest_547-896 00:01:49.080 # $HOSTNAME is the actual container id 00:01:49.080 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:49.080 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:49.080 # We can assume this is a mount from a host where container is running, 00:01:49.080 # so fetch its hostname to easily identify the target swarm worker. 00:01:49.080 container="$(< /etc/hostname) ($agent)" 00:01:49.080 else 00:01:49.080 # Fallback 00:01:49.080 container=$agent 00:01:49.080 fi 00:01:49.080 fi 00:01:49.080 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:49.080 00:01:49.091 [Pipeline] } 00:01:49.111 [Pipeline] // withEnv 00:01:49.119 [Pipeline] setCustomBuildProperty 00:01:49.133 [Pipeline] stage 00:01:49.135 [Pipeline] { (Tests) 00:01:49.152 [Pipeline] sh 00:01:49.431 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:49.702 [Pipeline] sh 00:01:49.980 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:49.995 [Pipeline] timeout 00:01:49.996 Timeout set to expire in 30 min 00:01:49.998 [Pipeline] { 00:01:50.014 [Pipeline] sh 00:01:50.294 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:50.860 HEAD is now at 241d0f3c9 test: fix dpdk builds on ubuntu24 00:01:50.873 [Pipeline] sh 00:01:51.154 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:51.428 [Pipeline] sh 00:01:51.708 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.982 [Pipeline] sh 00:01:52.276 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:52.535 ++ readlink -f spdk_repo 00:01:52.535 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:52.535 + [[ -n /home/vagrant/spdk_repo ]] 00:01:52.535 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:52.535 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:52.535 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:52.535 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:52.535 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:52.535 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:52.535 + cd /home/vagrant/spdk_repo 00:01:52.535 + source /etc/os-release 00:01:52.535 ++ NAME='Fedora Linux' 00:01:52.535 ++ VERSION='38 (Cloud Edition)' 00:01:52.535 ++ ID=fedora 00:01:52.535 ++ VERSION_ID=38 00:01:52.535 ++ VERSION_CODENAME= 00:01:52.535 ++ PLATFORM_ID=platform:f38 00:01:52.535 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:52.535 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.535 ++ LOGO=fedora-logo-icon 00:01:52.535 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:52.535 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.535 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:52.535 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.535 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.535 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.535 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:52.535 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.535 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:52.535 ++ SUPPORT_END=2024-05-14 00:01:52.535 ++ VARIANT='Cloud Edition' 00:01:52.535 ++ VARIANT_ID=cloud 00:01:52.535 + uname -a 00:01:52.535 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:52.535 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:53.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:53.104 Hugepages 00:01:53.104 node hugesize free / total 00:01:53.104 node0 1048576kB 0 / 0 00:01:53.104 node0 2048kB 0 / 0 00:01:53.104 00:01:53.104 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.104 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:53.104 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:53.104 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:53.104 + rm -f /tmp/spdk-ld-path 00:01:53.104 + source autorun-spdk.conf 00:01:53.104 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.104 ++ SPDK_TEST_NVMF=1 00:01:53.104 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.104 ++ SPDK_TEST_URING=1 00:01:53.104 ++ SPDK_TEST_USDT=1 00:01:53.104 ++ SPDK_RUN_UBSAN=1 00:01:53.104 ++ NET_TYPE=virt 00:01:53.104 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:53.104 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:53.104 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.104 ++ RUN_NIGHTLY=1 00:01:53.104 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:53.104 + [[ -n '' ]] 00:01:53.104 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:53.104 + for M in /var/spdk/build-*-manifest.txt 00:01:53.104 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:53.104 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.104 + for M in /var/spdk/build-*-manifest.txt 00:01:53.104 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.104 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.104 ++ uname 00:01:53.104 + [[ Linux == \L\i\n\u\x ]] 00:01:53.104 + sudo dmesg -T 00:01:53.104 + sudo dmesg --clear 00:01:53.104 + dmesg_pid=5840 00:01:53.104 + sudo dmesg -Tw 00:01:53.104 + [[ Fedora Linux == FreeBSD ]] 00:01:53.104 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.104 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.104 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.104 + [[ -x /usr/src/fio-static/fio ]] 00:01:53.104 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.104 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.104 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.104 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:53.104 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.104 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.104 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.104 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.104 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.104 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.104 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:53.104 Test configuration: 00:01:53.104 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.104 SPDK_TEST_NVMF=1 00:01:53.104 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.104 SPDK_TEST_URING=1 00:01:53.104 SPDK_TEST_USDT=1 00:01:53.104 SPDK_RUN_UBSAN=1 00:01:53.104 NET_TYPE=virt 00:01:53.104 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:53.104 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:53.104 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.104 RUN_NIGHTLY=1 21:41:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:53.104 21:41:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.104 21:41:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.104 21:41:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.104 21:41:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.104 21:41:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.104 21:41:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.104 21:41:58 -- paths/export.sh@5 -- $ export PATH 00:01:53.104 21:41:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.104 21:41:58 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:53.104 21:41:58 -- 
common/autobuild_common.sh@440 -- $ date +%s 00:01:53.104 21:41:58 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721857318.XXXXXX 00:01:53.104 21:41:58 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721857318.nMkonS 00:01:53.104 21:41:58 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:53.104 21:41:58 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:53.104 21:41:58 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:53.363 21:41:58 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:53.363 21:41:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:53.363 21:41:58 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.363 21:41:58 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:53.363 21:41:58 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:53.363 21:41:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.363 21:41:58 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:53.363 21:41:58 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:01:53.363 21:41:58 -- pm/common@17 -- $ local monitor 00:01:53.363 21:41:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.363 21:41:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.363 21:41:58 -- pm/common@25 -- $ sleep 1 00:01:53.363 21:41:58 -- pm/common@21 -- $ date +%s 00:01:53.363 21:41:58 -- pm/common@21 -- $ date +%s 00:01:53.363 21:41:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721857318 00:01:53.363 21:41:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721857318 00:01:53.363 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721857318_collect-vmstat.pm.log 00:01:53.363 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721857318_collect-cpu-load.pm.log 00:01:54.298 21:41:59 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:01:54.298 21:41:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:54.298 21:41:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:54.298 21:41:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:54.298 21:41:59 -- spdk/autobuild.sh@16 -- $ date -u 00:01:54.298 Wed Jul 24 09:41:59 PM UTC 2024 00:01:54.298 21:41:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:54.298 v24.05-15-g241d0f3c9 00:01:54.298 21:41:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:54.298 21:41:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:54.298 21:41:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:54.298 21:41:59 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:54.298 21:41:59 -- common/autotest_common.sh@1103 -- $ xtrace_disable 
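
[editor's sketch] The config_params string assembled above corresponds to a manual SPDK configure along these lines. This is a sketch only: autorun.sh applies these flags itself, and the make step is the usual follow-on rather than something shown at this point in the log.

# Flags copied verbatim from the config_params line above; running them by hand
# instead of via spdk/autorun.sh is an assumption for illustration.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
  --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
  --enable-ubsan --enable-coverage --with-ublk --with-uring \
  --with-dpdk=/home/vagrant/spdk_repo/dpdk/build
make -j"$(nproc)"
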
00:01:54.298 21:41:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.298 ************************************ 00:01:54.298 START TEST ubsan 00:01:54.298 ************************************ 00:01:54.298 using ubsan 00:01:54.298 21:41:59 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:54.298 00:01:54.298 real 0m0.000s 00:01:54.298 user 0m0.000s 00:01:54.298 sys 0m0.000s 00:01:54.298 21:41:59 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:54.298 21:41:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:54.298 ************************************ 00:01:54.298 END TEST ubsan 00:01:54.298 ************************************ 00:01:54.298 21:41:59 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:54.298 21:41:59 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:54.298 21:41:59 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:54.298 21:41:59 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:54.298 21:41:59 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:54.298 21:41:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.298 ************************************ 00:01:54.298 START TEST build_native_dpdk 00:01:54.298 ************************************ 00:01:54.298 21:41:59 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:54.298 caf0f5d395 version: 22.11.4 00:01:54.298 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:54.298 dc9c799c7d vhost: fix missing spinlock unlock 00:01:54.298 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:54.298 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:54.298 21:41:59 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:54.298 21:41:59 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:54.298 21:41:59 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:54.299 21:41:59 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:54.299 patching file config/rte_config.h 00:01:54.299 Hunk #1 succeeded at 60 (offset 1 line). 00:01:54.299 21:41:59 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:54.299 21:41:59 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:54.299 21:41:59 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:54.299 patching file lib/pcapng/rte_pcapng.c 00:01:54.299 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:54.299 21:41:59 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:54.299 21:41:59 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:54.299 21:41:59 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:54.299 21:41:59 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:54.299 21:42:00 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:59.562 The Meson build system 00:01:59.562 Version: 1.3.1 00:01:59.562 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:59.562 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:59.562 Build type: native build 00:01:59.562 Program cat found: YES (/usr/bin/cat) 00:01:59.562 Project name: DPDK 00:01:59.562 Project version: 22.11.4 00:01:59.562 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.562 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:59.562 Host machine cpu family: x86_64 00:01:59.562 Host machine cpu: x86_64 00:01:59.562 Message: ## Building in Developer Mode ## 00:01:59.562 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.562 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:59.562 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.562 Program objdump found: YES (/usr/bin/objdump) 00:01:59.562 Program python3 found: YES (/usr/bin/python3) 00:01:59.562 Program cat found: YES (/usr/bin/cat) 00:01:59.562 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
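
[editor's sketch] The "lt 22.11.4 21.11.0" and "lt 22.11.4 24.07.0" traces above come from the cmp_versions helper in scripts/common.sh, which splits each version string into fields and compares them numerically. A minimal illustration of the same idea follows; the function name and exact splitting are illustrative, not the script's code.

# Illustrative version-less-than check, modeled on the cmp_versions trace above.
ver_lt() {
  local IFS=.- a b i
  read -ra a <<< "$1"; read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field => less than
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # all fields equal => not less than
}
ver_lt 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0"   # matches the patch branch taken above
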
00:01:59.562 Checking for size of "void *" : 8 00:01:59.562 Checking for size of "void *" : 8 (cached) 00:01:59.562 Library m found: YES 00:01:59.562 Library numa found: YES 00:01:59.562 Has header "numaif.h" : YES 00:01:59.562 Library fdt found: NO 00:01:59.562 Library execinfo found: NO 00:01:59.562 Has header "execinfo.h" : YES 00:01:59.562 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.562 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.562 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.562 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.562 Run-time dependency openssl found: YES 3.0.9 00:01:59.562 Run-time dependency libpcap found: YES 1.10.4 00:01:59.562 Has header "pcap.h" with dependency libpcap: YES 00:01:59.562 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.562 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.562 Compiler for C supports arguments -Wformat: YES 00:01:59.562 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.562 Compiler for C supports arguments -Wformat-security: NO 00:01:59.562 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.562 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.562 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.562 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.562 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.562 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.562 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.562 Compiler for C supports arguments -Wundef: YES 00:01:59.562 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.562 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.562 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.562 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.562 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.562 Compiler for C supports arguments -mavx512f: YES 00:01:59.562 Checking if "AVX512 checking" compiles: YES 00:01:59.563 Fetching value of define "__SSE4_2__" : 1 00:01:59.563 Fetching value of define "__AES__" : 1 00:01:59.563 Fetching value of define "__AVX__" : 1 00:01:59.563 Fetching value of define "__AVX2__" : 1 00:01:59.563 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.563 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.563 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.563 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.563 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.563 Fetching value of define "__PCLMUL__" : 1 00:01:59.563 Fetching value of define "__RDRND__" : 1 00:01:59.563 Fetching value of define "__RDSEED__" : 1 00:01:59.563 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.563 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.563 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.563 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.563 Checking for function "getentropy" : YES 00:01:59.563 Message: lib/eal: Defining dependency "eal" 00:01:59.563 Message: lib/ring: Defining dependency "ring" 00:01:59.563 Message: lib/rcu: Defining dependency "rcu" 00:01:59.563 Message: lib/mempool: Defining dependency "mempool" 00:01:59.563 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.563 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:01:59.563 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.563 Compiler for C supports arguments -mpclmul: YES 00:01:59.563 Compiler for C supports arguments -maes: YES 00:01:59.563 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.563 Compiler for C supports arguments -mavx512bw: YES 00:01:59.563 Compiler for C supports arguments -mavx512dq: YES 00:01:59.563 Compiler for C supports arguments -mavx512vl: YES 00:01:59.563 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.563 Compiler for C supports arguments -mavx2: YES 00:01:59.563 Compiler for C supports arguments -mavx: YES 00:01:59.563 Message: lib/net: Defining dependency "net" 00:01:59.563 Message: lib/meter: Defining dependency "meter" 00:01:59.563 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.563 Message: lib/pci: Defining dependency "pci" 00:01:59.563 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.563 Message: lib/metrics: Defining dependency "metrics" 00:01:59.563 Message: lib/hash: Defining dependency "hash" 00:01:59.563 Message: lib/timer: Defining dependency "timer" 00:01:59.563 Fetching value of define "__AVX2__" : 1 (cached) 00:01:59.563 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.563 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:59.563 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:59.563 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:59.563 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:59.563 Message: lib/acl: Defining dependency "acl" 00:01:59.563 Message: lib/bbdev: Defining dependency "bbdev" 00:01:59.563 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:59.563 Run-time dependency libelf found: YES 0.190 00:01:59.563 Message: lib/bpf: Defining dependency "bpf" 00:01:59.563 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:59.563 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.563 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.563 Message: lib/distributor: Defining dependency "distributor" 00:01:59.563 Message: lib/efd: Defining dependency "efd" 00:01:59.563 Message: lib/eventdev: Defining dependency "eventdev" 00:01:59.563 Message: lib/gpudev: Defining dependency "gpudev" 00:01:59.563 Message: lib/gro: Defining dependency "gro" 00:01:59.563 Message: lib/gso: Defining dependency "gso" 00:01:59.563 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:59.563 Message: lib/jobstats: Defining dependency "jobstats" 00:01:59.563 Message: lib/latencystats: Defining dependency "latencystats" 00:01:59.563 Message: lib/lpm: Defining dependency "lpm" 00:01:59.563 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.563 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.563 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:59.563 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:59.563 Message: lib/member: Defining dependency "member" 00:01:59.563 Message: lib/pcapng: Defining dependency "pcapng" 00:01:59.563 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.563 Message: lib/power: Defining dependency "power" 00:01:59.563 Message: lib/rawdev: Defining dependency "rawdev" 00:01:59.563 Message: lib/regexdev: Defining dependency "regexdev" 00:01:59.563 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.563 Message: lib/rib: Defining 
dependency "rib" 00:01:59.563 Message: lib/reorder: Defining dependency "reorder" 00:01:59.563 Message: lib/sched: Defining dependency "sched" 00:01:59.563 Message: lib/security: Defining dependency "security" 00:01:59.563 Message: lib/stack: Defining dependency "stack" 00:01:59.563 Has header "linux/userfaultfd.h" : YES 00:01:59.563 Message: lib/vhost: Defining dependency "vhost" 00:01:59.563 Message: lib/ipsec: Defining dependency "ipsec" 00:01:59.563 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.563 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.563 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:59.563 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:59.563 Message: lib/fib: Defining dependency "fib" 00:01:59.563 Message: lib/port: Defining dependency "port" 00:01:59.563 Message: lib/pdump: Defining dependency "pdump" 00:01:59.563 Message: lib/table: Defining dependency "table" 00:01:59.563 Message: lib/pipeline: Defining dependency "pipeline" 00:01:59.563 Message: lib/graph: Defining dependency "graph" 00:01:59.563 Message: lib/node: Defining dependency "node" 00:01:59.563 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.563 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.563 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.563 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.563 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:59.563 Compiler for C supports arguments -Wno-unused-value: YES 00:01:59.563 Compiler for C supports arguments -Wno-format: YES 00:01:59.563 Compiler for C supports arguments -Wno-format-security: YES 00:01:59.563 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:00.937 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:00.937 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:00.937 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:00.938 Fetching value of define "__AVX2__" : 1 (cached) 00:02:00.938 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:00.938 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.938 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:00.938 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:00.938 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:00.938 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.938 Configuring doxy-api.conf using configuration 00:02:00.938 Program sphinx-build found: NO 00:02:00.938 Configuring rte_build_config.h using configuration 00:02:00.938 Message: 00:02:00.938 ================= 00:02:00.938 Applications Enabled 00:02:00.938 ================= 00:02:00.938 00:02:00.938 apps: 00:02:00.938 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:00.938 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:00.938 test-security-perf, 00:02:00.938 00:02:00.938 Message: 00:02:00.938 ================= 00:02:00.938 Libraries Enabled 00:02:00.938 ================= 00:02:00.938 00:02:00.938 libs: 00:02:00.938 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:00.938 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:00.938 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:00.938 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:00.938 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:00.938 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:00.938 table, pipeline, graph, node, 00:02:00.938 00:02:00.938 Message: 00:02:00.938 =============== 00:02:00.938 Drivers Enabled 00:02:00.938 =============== 00:02:00.938 00:02:00.938 common: 00:02:00.938 00:02:00.938 bus: 00:02:00.938 pci, vdev, 00:02:00.938 mempool: 00:02:00.938 ring, 00:02:00.938 dma: 00:02:00.938 00:02:00.938 net: 00:02:00.938 i40e, 00:02:00.938 raw: 00:02:00.938 00:02:00.938 crypto: 00:02:00.938 00:02:00.938 compress: 00:02:00.938 00:02:00.938 regex: 00:02:00.938 00:02:00.938 vdpa: 00:02:00.938 00:02:00.938 event: 00:02:00.938 00:02:00.938 baseband: 00:02:00.938 00:02:00.938 gpu: 00:02:00.938 00:02:00.938 00:02:00.938 Message: 00:02:00.938 ================= 00:02:00.938 Content Skipped 00:02:00.938 ================= 00:02:00.938 00:02:00.938 apps: 00:02:00.938 00:02:00.938 libs: 00:02:00.938 kni: explicitly disabled via build config (deprecated lib) 00:02:00.938 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:00.938 00:02:00.938 drivers: 00:02:00.938 common/cpt: not in enabled drivers build config 00:02:00.938 common/dpaax: not in enabled drivers build config 00:02:00.938 common/iavf: not in enabled drivers build config 00:02:00.938 common/idpf: not in enabled drivers build config 00:02:00.938 common/mvep: not in enabled drivers build config 00:02:00.938 common/octeontx: not in enabled drivers build config 00:02:00.938 bus/auxiliary: not in enabled drivers build config 00:02:00.938 bus/dpaa: not in enabled drivers build config 00:02:00.938 bus/fslmc: not in enabled drivers build config 00:02:00.938 bus/ifpga: not in enabled drivers build config 00:02:00.938 bus/vmbus: not in enabled drivers build config 00:02:00.938 common/cnxk: not in enabled drivers build config 00:02:00.938 common/mlx5: not in enabled drivers build config 00:02:00.938 common/qat: not in enabled drivers build config 00:02:00.938 common/sfc_efx: not in enabled drivers build config 00:02:00.938 mempool/bucket: not in enabled drivers build config 00:02:00.938 mempool/cnxk: not in enabled drivers build config 00:02:00.938 mempool/dpaa: not in enabled drivers build config 00:02:00.938 mempool/dpaa2: not in enabled drivers build config 00:02:00.938 mempool/octeontx: not in enabled drivers build config 00:02:00.938 mempool/stack: not in enabled drivers build config 00:02:00.938 dma/cnxk: not in enabled drivers build config 00:02:00.938 dma/dpaa: not in enabled drivers build config 00:02:00.938 dma/dpaa2: not in enabled drivers build config 00:02:00.938 dma/hisilicon: not in enabled drivers build config 00:02:00.938 dma/idxd: not in enabled drivers build config 00:02:00.938 dma/ioat: not in enabled drivers build config 00:02:00.938 dma/skeleton: not in enabled drivers build config 00:02:00.938 net/af_packet: not in enabled drivers build config 00:02:00.938 net/af_xdp: not in enabled drivers build config 00:02:00.938 net/ark: not in enabled drivers build config 00:02:00.938 net/atlantic: not in enabled drivers build config 00:02:00.938 net/avp: not in enabled drivers build config 00:02:00.938 net/axgbe: not in enabled drivers build config 00:02:00.938 net/bnx2x: not in enabled drivers build config 00:02:00.938 net/bnxt: not in enabled drivers build config 00:02:00.938 net/bonding: not in enabled drivers build config 00:02:00.938 net/cnxk: not in enabled drivers build config 00:02:00.938 net/cxgbe: not in 
enabled drivers build config 00:02:00.938 net/dpaa: not in enabled drivers build config 00:02:00.938 net/dpaa2: not in enabled drivers build config 00:02:00.938 net/e1000: not in enabled drivers build config 00:02:00.938 net/ena: not in enabled drivers build config 00:02:00.938 net/enetc: not in enabled drivers build config 00:02:00.938 net/enetfec: not in enabled drivers build config 00:02:00.938 net/enic: not in enabled drivers build config 00:02:00.938 net/failsafe: not in enabled drivers build config 00:02:00.938 net/fm10k: not in enabled drivers build config 00:02:00.938 net/gve: not in enabled drivers build config 00:02:00.938 net/hinic: not in enabled drivers build config 00:02:00.938 net/hns3: not in enabled drivers build config 00:02:00.938 net/iavf: not in enabled drivers build config 00:02:00.938 net/ice: not in enabled drivers build config 00:02:00.938 net/idpf: not in enabled drivers build config 00:02:00.938 net/igc: not in enabled drivers build config 00:02:00.938 net/ionic: not in enabled drivers build config 00:02:00.938 net/ipn3ke: not in enabled drivers build config 00:02:00.938 net/ixgbe: not in enabled drivers build config 00:02:00.938 net/kni: not in enabled drivers build config 00:02:00.938 net/liquidio: not in enabled drivers build config 00:02:00.938 net/mana: not in enabled drivers build config 00:02:00.938 net/memif: not in enabled drivers build config 00:02:00.938 net/mlx4: not in enabled drivers build config 00:02:00.938 net/mlx5: not in enabled drivers build config 00:02:00.938 net/mvneta: not in enabled drivers build config 00:02:00.938 net/mvpp2: not in enabled drivers build config 00:02:00.938 net/netvsc: not in enabled drivers build config 00:02:00.938 net/nfb: not in enabled drivers build config 00:02:00.938 net/nfp: not in enabled drivers build config 00:02:00.938 net/ngbe: not in enabled drivers build config 00:02:00.938 net/null: not in enabled drivers build config 00:02:00.938 net/octeontx: not in enabled drivers build config 00:02:00.938 net/octeon_ep: not in enabled drivers build config 00:02:00.938 net/pcap: not in enabled drivers build config 00:02:00.938 net/pfe: not in enabled drivers build config 00:02:00.938 net/qede: not in enabled drivers build config 00:02:00.938 net/ring: not in enabled drivers build config 00:02:00.938 net/sfc: not in enabled drivers build config 00:02:00.938 net/softnic: not in enabled drivers build config 00:02:00.938 net/tap: not in enabled drivers build config 00:02:00.938 net/thunderx: not in enabled drivers build config 00:02:00.938 net/txgbe: not in enabled drivers build config 00:02:00.938 net/vdev_netvsc: not in enabled drivers build config 00:02:00.938 net/vhost: not in enabled drivers build config 00:02:00.938 net/virtio: not in enabled drivers build config 00:02:00.938 net/vmxnet3: not in enabled drivers build config 00:02:00.938 raw/cnxk_bphy: not in enabled drivers build config 00:02:00.938 raw/cnxk_gpio: not in enabled drivers build config 00:02:00.938 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:00.938 raw/ifpga: not in enabled drivers build config 00:02:00.938 raw/ntb: not in enabled drivers build config 00:02:00.938 raw/skeleton: not in enabled drivers build config 00:02:00.938 crypto/armv8: not in enabled drivers build config 00:02:00.938 crypto/bcmfs: not in enabled drivers build config 00:02:00.938 crypto/caam_jr: not in enabled drivers build config 00:02:00.938 crypto/ccp: not in enabled drivers build config 00:02:00.938 crypto/cnxk: not in enabled drivers build config 00:02:00.938 
crypto/dpaa_sec: not in enabled drivers build config 00:02:00.938 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.938 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.938 crypto/mlx5: not in enabled drivers build config 00:02:00.938 crypto/mvsam: not in enabled drivers build config 00:02:00.938 crypto/nitrox: not in enabled drivers build config 00:02:00.938 crypto/null: not in enabled drivers build config 00:02:00.938 crypto/octeontx: not in enabled drivers build config 00:02:00.938 crypto/openssl: not in enabled drivers build config 00:02:00.938 crypto/scheduler: not in enabled drivers build config 00:02:00.938 crypto/uadk: not in enabled drivers build config 00:02:00.938 crypto/virtio: not in enabled drivers build config 00:02:00.938 compress/isal: not in enabled drivers build config 00:02:00.938 compress/mlx5: not in enabled drivers build config 00:02:00.938 compress/octeontx: not in enabled drivers build config 00:02:00.938 compress/zlib: not in enabled drivers build config 00:02:00.938 regex/mlx5: not in enabled drivers build config 00:02:00.938 regex/cn9k: not in enabled drivers build config 00:02:00.938 vdpa/ifc: not in enabled drivers build config 00:02:00.938 vdpa/mlx5: not in enabled drivers build config 00:02:00.938 vdpa/sfc: not in enabled drivers build config 00:02:00.938 event/cnxk: not in enabled drivers build config 00:02:00.938 event/dlb2: not in enabled drivers build config 00:02:00.938 event/dpaa: not in enabled drivers build config 00:02:00.938 event/dpaa2: not in enabled drivers build config 00:02:00.938 event/dsw: not in enabled drivers build config 00:02:00.938 event/opdl: not in enabled drivers build config 00:02:00.938 event/skeleton: not in enabled drivers build config 00:02:00.938 event/sw: not in enabled drivers build config 00:02:00.938 event/octeontx: not in enabled drivers build config 00:02:00.938 baseband/acc: not in enabled drivers build config 00:02:00.938 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:00.938 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:00.938 baseband/la12xx: not in enabled drivers build config 00:02:00.938 baseband/null: not in enabled drivers build config 00:02:00.938 baseband/turbo_sw: not in enabled drivers build config 00:02:00.938 gpu/cuda: not in enabled drivers build config 00:02:00.938 00:02:00.938 00:02:00.938 Build targets in project: 314 00:02:00.938 00:02:00.939 DPDK 22.11.4 00:02:00.939 00:02:00.939 User defined options 00:02:00.939 libdir : lib 00:02:00.939 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:00.939 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:00.939 c_link_args : 00:02:00.939 enable_docs : false 00:02:00.939 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:00.939 enable_kmods : false 00:02:00.939 machine : native 00:02:00.939 tests : false 00:02:00.939 00:02:00.939 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.939 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
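(Aside on the configure summary above: the meson warning refers to invoking `meson [options]` directly; the non-deprecated spelling is `meson setup [options]`. Below is a minimal sketch of an equivalent, non-deprecated configure invocation reconstructed purely from the "User defined options" block in this log — the actual command run by autobuild_common.sh is not shown here, so every path and flag should be read as an assumption, not the script's real command line.

    # hypothetical reconstruction of the configure step shown in the summary above,
    # run from the DPDK source tree; build dir and prefix taken from the log
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

Drivers not named in -Denable_drivers are the ones reported above under "Content Skipped" as "not in enabled drivers build config".)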
00:02:00.939 21:42:06 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:00.939 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:00.939 [1/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:00.939 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:00.939 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:00.939 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:00.939 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.939 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.197 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.197 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.197 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.197 [10/743] Linking static target lib/librte_kvargs.a 00:02:01.197 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.197 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.197 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.197 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.197 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.197 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.197 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.197 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.456 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.456 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.456 [21/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.456 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:01.456 [23/743] Linking target lib/librte_kvargs.so.23.0 00:02:01.456 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.456 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.456 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.456 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.726 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.726 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.726 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.726 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.726 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.726 [33/743] Linking static target lib/librte_telemetry.a 00:02:01.726 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.726 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.726 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.726 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.726 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:01.726 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.726 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.726 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.038 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.038 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.038 [44/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.038 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.038 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:02.038 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.038 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.038 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.038 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:02.038 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.296 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.296 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.296 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.296 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.296 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.296 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.296 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.296 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.296 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:02.296 [61/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:02.296 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.296 [63/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.296 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.296 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:02.296 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.555 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.555 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.555 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:02.555 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.555 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.555 [72/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:02.555 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.555 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:02.555 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:02.555 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:02.555 [77/743] Generating lib/rte_eal_def with a custom command 00:02:02.555 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:02.555 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.555 [80/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.555 [81/743] Generating lib/rte_ring_def with a custom command 00:02:02.555 [82/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.555 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:02.555 [84/743] Generating lib/rte_rcu_mingw with a custom command 00:02:02.555 [85/743] Generating lib/rte_rcu_def with a custom command 00:02:02.555 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.814 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.814 [88/743] Linking static target lib/librte_ring.a 00:02:02.814 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.814 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:02.814 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:02.814 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.073 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.073 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.073 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.073 [96/743] Linking static target lib/librte_eal.a 00:02:03.333 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.333 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:03.333 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.333 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:03.333 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.333 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.592 [103/743] Linking static target lib/librte_rcu.a 00:02:03.592 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.592 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.592 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.592 [107/743] Linking static target lib/librte_mempool.a 00:02:03.850 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.850 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.850 [110/743] Generating lib/rte_net_def with a custom command 00:02:03.850 [111/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.850 [112/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.850 [113/743] Generating lib/rte_net_mingw with a custom command 00:02:03.850 [114/743] Generating lib/rte_meter_def with a custom command 00:02:03.850 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:04.110 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:04.110 [117/743] Linking static target lib/librte_meter.a 00:02:04.110 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.110 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.367 [120/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.367 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:04.367 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:04.367 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.367 [124/743] Linking static target lib/librte_net.a 00:02:04.367 [125/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.626 [126/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.626 [127/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.626 [128/743] Linking static target lib/librte_mbuf.a 00:02:04.885 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.885 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:04.885 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.885 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.885 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.143 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:05.143 [135/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.729 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:05.729 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:05.729 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.729 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:05.729 [140/743] Generating lib/rte_pci_def with a custom command 00:02:05.729 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:05.729 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:05.729 [143/743] Linking static target lib/librte_pci.a 00:02:05.729 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:05.729 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.729 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:05.729 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:05.988 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:05.988 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.988 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:05.988 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:05.988 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:05.988 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:05.988 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:05.988 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:05.988 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:05.988 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:05.988 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:06.246 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.246 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:06.246 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:02:06.246 [162/743] Generating lib/rte_metrics_def with a custom command 00:02:06.246 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:06.246 [164/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:06.246 [165/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.246 [166/743] Generating lib/rte_hash_def with a custom command 00:02:06.246 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:06.246 [168/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:06.246 [169/743] Generating lib/rte_timer_def with a custom command 00:02:06.246 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:06.505 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.505 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:06.505 [173/743] Linking static target lib/librte_cmdline.a 00:02:06.763 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:06.763 [175/743] Linking static target lib/librte_metrics.a 00:02:06.763 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:06.763 [177/743] Linking static target lib/librte_timer.a 00:02:07.022 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.280 [179/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.280 [180/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.280 [181/743] Linking static target lib/librte_ethdev.a 00:02:07.280 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:07.280 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.280 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.846 [185/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:07.846 [186/743] Generating lib/rte_acl_def with a custom command 00:02:07.846 [187/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:07.846 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:07.846 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:07.846 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:08.104 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:08.104 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:08.104 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:08.671 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:08.671 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:08.671 [196/743] Linking static target lib/librte_bitratestats.a 00:02:08.671 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:08.930 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.930 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:08.930 [200/743] Linking static target lib/librte_bbdev.a 00:02:09.189 [201/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:09.189 [202/743] Linking static target lib/librte_hash.a 00:02:09.189 [203/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:09.190 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:09.448 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:02:09.448 [206/743] Generating lib/bbdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:09.448 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:09.448 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:09.448 [209/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:09.707 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.707 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:09.707 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:09.995 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:09.995 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:09.995 [215/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:10.260 [216/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:10.260 [217/743] Linking static target lib/librte_cfgfile.a 00:02:10.260 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:10.260 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:10.260 [220/743] Linking static target lib/librte_acl.a 00:02:10.260 [221/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:10.260 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:10.260 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:10.261 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:10.519 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.519 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.519 [227/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.519 [228/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:10.778 [229/743] Generating lib/rte_cryptodev_def with a custom command 00:02:10.778 [230/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:10.778 [231/743] Linking target lib/librte_eal.so.23.0 00:02:10.778 [232/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.778 [233/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.778 [234/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:10.778 [235/743] Linking target lib/librte_ring.so.23.0 00:02:10.778 [236/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:11.036 [237/743] Linking target lib/librte_meter.so.23.0 00:02:11.036 [238/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.036 [239/743] Linking target lib/librte_pci.so.23.0 00:02:11.036 [240/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:11.036 [241/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.036 [242/743] Linking target lib/librte_rcu.so.23.0 00:02:11.036 [243/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:11.036 [244/743] Linking target lib/librte_mempool.so.23.0 00:02:11.036 [245/743] Linking target lib/librte_timer.so.23.0 00:02:11.036 [246/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:11.036 [247/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:11.036 [248/743] Linking static target lib/librte_bpf.a 00:02:11.295 [249/743] Linking 
target lib/librte_acl.so.23.0 00:02:11.295 [250/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:11.295 [251/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:11.295 [252/743] Linking target lib/librte_mbuf.so.23.0 00:02:11.295 [253/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.295 [254/743] Linking static target lib/librte_compressdev.a 00:02:11.295 [255/743] Linking target lib/librte_cfgfile.so.23.0 00:02:11.295 [256/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:11.295 [257/743] Generating lib/rte_distributor_def with a custom command 00:02:11.295 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:11.295 [259/743] Generating lib/rte_efd_def with a custom command 00:02:11.295 [260/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:11.295 [261/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:11.295 [262/743] Generating lib/rte_efd_mingw with a custom command 00:02:11.295 [263/743] Linking target lib/librte_bbdev.so.23.0 00:02:11.295 [264/743] Linking target lib/librte_net.so.23.0 00:02:11.555 [265/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.555 [266/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:11.555 [267/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:11.555 [268/743] Linking target lib/librte_cmdline.so.23.0 00:02:11.555 [269/743] Linking target lib/librte_hash.so.23.0 00:02:11.813 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:11.813 [271/743] Linking static target lib/librte_distributor.a 00:02:11.813 [272/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:11.813 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.072 [274/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.072 [275/743] Linking target lib/librte_ethdev.so.23.0 00:02:12.072 [276/743] Linking target lib/librte_distributor.so.23.0 00:02:12.072 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:12.072 [278/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:12.072 [279/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.072 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:12.072 [281/743] Linking target lib/librte_metrics.so.23.0 00:02:12.072 [282/743] Linking target lib/librte_bpf.so.23.0 00:02:12.072 [283/743] Linking target lib/librte_compressdev.so.23.0 00:02:12.331 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:12.331 [285/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:12.331 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:02:12.331 [287/743] Generating lib/rte_eventdev_def with a custom command 00:02:12.331 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:12.331 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:12.331 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:12.589 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:12.848 [292/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.848 [293/743] Linking static target lib/librte_cryptodev.a 00:02:12.848 [294/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:12.848 [295/743] Linking static target lib/librte_efd.a 00:02:12.848 [296/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:13.106 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.106 [298/743] Linking target lib/librte_efd.so.23.0 00:02:13.364 [299/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:13.364 [300/743] Linking static target lib/librte_gpudev.a 00:02:13.364 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:13.364 [302/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:13.364 [303/743] Generating lib/rte_gro_def with a custom command 00:02:13.364 [304/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:13.364 [305/743] Generating lib/rte_gro_mingw with a custom command 00:02:13.364 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:13.622 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:13.622 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:13.906 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:13.906 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:13.906 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:14.219 [312/743] Generating lib/rte_gso_def with a custom command 00:02:14.219 [313/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.219 [314/743] Generating lib/rte_gso_mingw with a custom command 00:02:14.219 [315/743] Linking target lib/librte_gpudev.so.23.0 00:02:14.219 [316/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:14.219 [317/743] Linking static target lib/librte_gro.a 00:02:14.219 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:14.219 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.219 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:14.478 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:14.478 [322/743] Linking target lib/librte_gro.so.23.0 00:02:14.478 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:14.478 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:14.478 [325/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:14.478 [326/743] Linking static target lib/librte_jobstats.a 00:02:14.478 [327/743] Generating lib/rte_jobstats_def with a custom command 00:02:14.736 [328/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:14.736 [329/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:14.736 [330/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:14.736 [331/743] Linking static target lib/librte_gso.a 00:02:14.736 [332/743] Linking static target lib/librte_eventdev.a 00:02:14.736 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:14.736 [334/743] Generating lib/gso.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:14.736 [335/743] Linking target lib/librte_gso.so.23.0 00:02:14.995 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:14.995 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:14.995 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:14.995 [339/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.995 [340/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.995 [341/743] Linking target lib/librte_jobstats.so.23.0 00:02:14.995 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:14.995 [343/743] Linking target lib/librte_cryptodev.so.23.0 00:02:14.995 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:14.995 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:14.995 [346/743] Generating lib/rte_lpm_mingw with a custom command 00:02:15.254 [347/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:15.254 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:15.254 [349/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:15.254 [350/743] Linking static target lib/librte_ip_frag.a 00:02:15.513 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.513 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:15.513 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:15.513 [354/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:15.513 [355/743] Linking static target lib/librte_latencystats.a 00:02:15.771 [356/743] Generating lib/rte_member_def with a custom command 00:02:15.771 [357/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:15.771 [358/743] Generating lib/rte_member_mingw with a custom command 00:02:15.771 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:15.771 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:15.771 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:15.771 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:15.771 [363/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.771 [364/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.771 [365/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:15.771 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:16.030 [367/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:16.030 [368/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.030 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:16.030 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.287 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:16.287 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:16.287 [373/743] Generating lib/rte_power_def with a custom command 00:02:16.287 [374/743] Generating lib/rte_power_mingw with a custom command 00:02:16.545 [375/743] Compiling C 
object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:16.545 [376/743] Linking static target lib/librte_lpm.a 00:02:16.545 [377/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.545 [378/743] Generating lib/rte_rawdev_def with a custom command 00:02:16.545 [379/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:16.545 [380/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.545 [381/743] Linking target lib/librte_eventdev.so.23.0 00:02:16.545 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.804 [383/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:16.804 [384/743] Linking static target lib/librte_pcapng.a 00:02:16.804 [385/743] Generating lib/rte_regexdev_def with a custom command 00:02:16.804 [386/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:16.804 [387/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:16.804 [388/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:16.804 [389/743] Linking static target lib/librte_rawdev.a 00:02:16.804 [390/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.804 [391/743] Generating lib/rte_dmadev_def with a custom command 00:02:16.804 [392/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:16.804 [393/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:16.804 [394/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.804 [395/743] Linking target lib/librte_lpm.so.23.0 00:02:16.804 [396/743] Generating lib/rte_rib_def with a custom command 00:02:16.804 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:17.063 [398/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:17.063 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:17.063 [400/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:17.063 [401/743] Linking static target lib/librte_power.a 00:02:17.063 [402/743] Generating lib/rte_reorder_mingw with a custom command 00:02:17.063 [403/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.063 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:17.063 [405/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.063 [406/743] Linking static target lib/librte_dmadev.a 00:02:17.321 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:17.321 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.321 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:17.321 [410/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:17.321 [411/743] Linking static target lib/librte_regexdev.a 00:02:17.321 [412/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:17.321 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:17.580 [414/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:17.580 [415/743] Generating lib/rte_sched_def with a custom command 00:02:17.580 [416/743] Linking static target lib/librte_member.a 00:02:17.580 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:17.580 [418/743] Generating lib/rte_sched_mingw 
with a custom command 00:02:17.580 [419/743] Generating lib/rte_security_def with a custom command 00:02:17.580 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:17.580 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:17.580 [422/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:17.851 [423/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.851 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:17.851 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:17.851 [426/743] Linking target lib/librte_dmadev.so.23.0 00:02:17.851 [427/743] Linking static target lib/librte_stack.a 00:02:17.851 [428/743] Generating lib/rte_stack_def with a custom command 00:02:17.851 [429/743] Generating lib/rte_stack_mingw with a custom command 00:02:17.851 [430/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.851 [431/743] Linking static target lib/librte_reorder.a 00:02:17.851 [432/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.851 [433/743] Linking target lib/librte_member.so.23.0 00:02:17.851 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:17.851 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:17.851 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.851 [437/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.112 [438/743] Linking target lib/librte_stack.so.23.0 00:02:18.112 [439/743] Linking target lib/librte_power.so.23.0 00:02:18.112 [440/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.112 [441/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.112 [442/743] Linking target lib/librte_reorder.so.23.0 00:02:18.112 [443/743] Linking target lib/librte_regexdev.so.23.0 00:02:18.112 [444/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:18.112 [445/743] Linking static target lib/librte_rib.a 00:02:18.371 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.371 [447/743] Linking static target lib/librte_security.a 00:02:18.630 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.630 [449/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.630 [450/743] Linking target lib/librte_rib.so.23.0 00:02:18.630 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:18.630 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:18.630 [453/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.888 [454/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.888 [455/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:18.888 [456/743] Linking target lib/librte_security.so.23.0 00:02:18.888 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.888 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:18.888 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:18.888 [460/743] Linking static target lib/librte_sched.a 00:02:19.454 [461/743] Generating 
lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.454 [462/743] Linking target lib/librte_sched.so.23.0 00:02:19.454 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.454 [464/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:19.454 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:19.454 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:19.713 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:19.713 [468/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:19.713 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.713 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:19.713 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:20.279 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:20.279 [473/743] Generating lib/rte_fib_def with a custom command 00:02:20.279 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:20.279 [475/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:20.279 [476/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:20.279 [477/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:20.279 [478/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:20.279 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:20.537 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:20.537 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:20.537 [482/743] Linking static target lib/librte_ipsec.a 00:02:20.795 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.053 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:21.053 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:21.053 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:21.053 [487/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:21.053 [488/743] Linking static target lib/librte_fib.a 00:02:21.311 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:21.311 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:21.311 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:21.570 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.570 [493/743] Linking target lib/librte_fib.so.23.0 00:02:21.827 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:22.087 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:22.087 [496/743] Generating lib/rte_port_def with a custom command 00:02:22.087 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:22.087 [498/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:22.087 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:22.346 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:22.346 [501/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:22.346 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:22.346 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:22.346 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:22.603 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:22.603 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:22.603 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:22.603 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:22.861 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:22.861 [510/743] Linking static target lib/librte_port.a 00:02:23.119 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:23.119 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:23.119 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:23.377 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.377 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:23.377 [516/743] Linking target lib/librte_port.so.23.0 00:02:23.377 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:23.377 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:23.377 [519/743] Linking static target lib/librte_pdump.a 00:02:23.635 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:23.635 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.635 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:23.893 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:23.893 [524/743] Generating lib/rte_table_def with a custom command 00:02:24.151 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:24.151 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:24.151 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:24.151 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:24.151 [529/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.409 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:24.409 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:24.409 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:24.409 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:24.666 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:24.666 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:24.666 [536/743] Linking static target lib/librte_table.a 00:02:24.666 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:25.274 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:25.274 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:25.274 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.274 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:25.274 [542/743] Linking target lib/librte_table.so.23.0 00:02:25.274 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:25.274 [544/743] Generating lib/rte_graph_def with a custom command 00:02:25.533 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:25.533 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:25.533 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:25.790 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:26.048 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:26.048 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:26.048 [551/743] Linking static target lib/librte_graph.a 00:02:26.048 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:26.306 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:26.306 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:26.306 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:26.873 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:26.873 [557/743] Generating lib/rte_node_def with a custom command 00:02:26.873 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:26.873 [559/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.873 [560/743] Linking target lib/librte_graph.so.23.0 00:02:26.873 [561/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:26.873 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.873 [563/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:26.873 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:26.873 [565/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:27.140 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.140 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.140 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:27.140 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:27.140 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.140 [571/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.140 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:27.140 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:27.140 [574/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:27.140 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:27.140 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:27.400 [577/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:27.400 [578/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:27.400 [579/743] Linking static target lib/librte_node.a 00:02:27.400 [580/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.400 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.658 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.658 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:27.658 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.658 [585/743] Linking static target drivers/librte_bus_vdev.a 00:02:27.658 [586/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:02:27.658 [587/743] Linking target lib/librte_node.so.23.0 00:02:27.658 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:27.658 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.918 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.918 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.918 [592/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.918 [593/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:27.918 [594/743] Linking static target drivers/librte_bus_pci.a 00:02:27.918 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.176 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:28.176 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:28.176 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:28.176 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.176 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:28.434 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:28.434 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:28.434 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.434 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.694 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:28.694 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.694 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:28.694 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.694 [609/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:28.694 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:29.261 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:29.520 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:29.520 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:29.520 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:30.087 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:30.087 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:30.087 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:30.655 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:30.913 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:30.914 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:30.914 [621/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:30.914 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:30.914 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:30.914 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:31.172 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:32.109 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:32.367 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:32.367 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:32.367 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:32.626 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:32.626 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:32.626 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:32.626 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:32.626 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:32.885 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:33.144 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:33.403 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:33.681 [638/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:33.681 [639/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:33.681 [640/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:33.939 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:33.939 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:33.939 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:33.939 [644/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:33.939 [645/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:34.197 [646/743] Linking static target drivers/librte_net_i40e.a 00:02:34.197 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:34.197 [648/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.197 [649/743] Linking static target lib/librte_vhost.a 00:02:34.456 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:34.456 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:34.715 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:34.715 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.715 [654/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:34.973 [655/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:34.973 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:34.973 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:35.540 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:35.540 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:35.540 [660/743] Generating lib/vhost.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:35.540 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:35.540 [662/743] Linking target lib/librte_vhost.so.23.0 00:02:35.540 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:35.799 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:35.799 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:35.799 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:35.799 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:36.058 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:36.317 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:36.317 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:36.575 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:36.575 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:36.834 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:37.093 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:37.352 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:37.611 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:37.611 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:37.611 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:37.611 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:37.868 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:37.868 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:38.435 [682/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:38.435 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:38.435 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:38.435 [685/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:38.435 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:38.693 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:38.693 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:38.693 [689/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:38.952 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:38.952 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:38.952 [692/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:38.952 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:39.211 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:39.469 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:39.728 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:39.728 [697/743] Compiling C 
object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:39.986 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:39.986 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:40.553 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:40.553 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:40.553 [702/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:40.812 [703/743] Linking static target lib/librte_pipeline.a 00:02:40.812 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:40.812 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:41.071 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:41.071 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:41.071 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:41.330 [709/743] Linking target app/dpdk-dumpcap 00:02:41.330 [710/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:41.330 [711/743] Linking target app/dpdk-pdump 00:02:41.330 [712/743] Linking target app/dpdk-proc-info 00:02:41.330 [713/743] Linking target app/dpdk-test-acl 00:02:41.589 [714/743] Linking target app/dpdk-test-bbdev 00:02:41.847 [715/743] Linking target app/dpdk-test-cmdline 00:02:41.847 [716/743] Linking target app/dpdk-test-crypto-perf 00:02:41.847 [717/743] Linking target app/dpdk-test-eventdev 00:02:41.847 [718/743] Linking target app/dpdk-test-fib 00:02:41.847 [719/743] Linking target app/dpdk-test-compress-perf 00:02:42.117 [720/743] Linking target app/dpdk-test-flow-perf 00:02:42.117 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:42.117 [722/743] Linking target app/dpdk-test-gpudev 00:02:42.377 [723/743] Linking target app/dpdk-test-pipeline 00:02:42.377 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:42.944 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:42.944 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:42.944 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:42.944 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:42.944 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:43.202 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:43.460 [731/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:43.460 [732/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.718 [733/743] Linking target lib/librte_pipeline.so.23.0 00:02:43.718 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:43.718 [735/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:43.976 [736/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:43.976 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:44.235 [738/743] Linking target app/dpdk-test-sad 00:02:44.235 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:44.235 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:44.235 [741/743] Linking target app/dpdk-test-regex 00:02:44.803 [742/743] Linking target app/dpdk-testpmd 00:02:44.803 [743/743] Linking target app/dpdk-test-security-perf 
00:02:44.803 21:42:50 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:44.803 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:44.803 [0/1] Installing files. 00:02:45.370 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.370 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.371 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.372 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.373 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:45.373 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.374 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:45.374 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.375 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.375 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.375 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.375 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.376 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.376 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.376 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:45.376 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:45.376 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.376 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.637 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
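At this point the install step has copied the core EAL public headers (rte_eal.h, rte_lcore.h, rte_malloc.h, rte_memzone.h and friends) into /home/vagrant/spdk_repo/dpdk/build/include. As a minimal sketch of what a consumer of this install tree looks like (not part of the build log itself; the file name and the pkg-config build command are assumptions, relying on the libdpdk.pc file installed later in this log), an application would initialize and tear down the EAL roughly as follows:

    /* eal_smoke.c (hypothetical): smallest possible consumer of the headers above.
     * Assumed build command: cc eal_smoke.c $(pkg-config --cflags --libs libdpdk) -o eal_smoke
     */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_version.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() consumes the EAL arguments (lcores, hugepages, PCI devices). */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        printf("%s up, main lcore %u, %u worker lcores\n",
               rte_version(), rte_get_main_lcore(), rte_lcore_count() - 1);
        /* Release hugepages and other EAL resources before exiting. */
        rte_eal_cleanup();
        return 0;
    }

Any EAL options (core list, hugepage settings, PCI allow/block lists) are passed on the command line ahead of the application's own arguments.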
00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
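The headers just installed (rte_ring.h, rte_mempool.h, rte_mbuf.h and the rte_net protocol headers) are the data-path building blocks that the DPDK example apps and SPDK build on. Purely as an illustrative sketch (the names demo_pool and demo_ring are invented for this example, and the program assumes the EAL setup shown earlier), a packet-buffer pool and a single-producer/single-consumer ring from these headers can be wired together like this:

    /* ring_mbuf_demo.c (hypothetical): hand one mbuf from a producer to a consumer. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_ring.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        /* Pool of packet buffers backed by hugepage memory. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("demo_pool", 1023, 256,
                0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        /* Single-producer/single-consumer ring used as a hand-off queue. */
        struct rte_ring *ring = rte_ring_create("demo_ring", 1024, rte_socket_id(),
                RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (pool == NULL || ring == NULL) {
            fprintf(stderr, "setup failed\n");
            rte_eal_cleanup();
            return 1;
        }

        /* Producer side: take an mbuf from the pool and enqueue it. */
        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
        if (m != NULL && rte_ring_enqueue(ring, m) != 0)
            rte_pktmbuf_free(m);

        /* Consumer side: dequeue the buffer and return it to the pool. */
        void *obj = NULL;
        if (rte_ring_dequeue(ring, &obj) == 0)
            rte_pktmbuf_free((struct rte_mbuf *)obj);

        rte_eal_cleanup();
        return 0;
    }

Ring sizes must be a power of two unless RING_F_EXACT_SZ is requested, hence the 1024-slot ring, while the mempool holds 1023 mbufs (a power of two minus one, the size rte_mempool documents as most memory-efficient).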
00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.638 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.639 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:45.640 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:45.640 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:45.640 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:45.640 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:45.640 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:45.640 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:45.640 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:45.640 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:45.640 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:45.640 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:45.640 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:45.640 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:45.640 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:45.640 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:45.640 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:45.640 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:45.640 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:45.640 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:45.640 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:45.640 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:45.640 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:45.640 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:45.640 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:45.640 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:45.640 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:45.640 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:45.640 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:45.640 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:45.640 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:45.640 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:45.640 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:45.640 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:45.640 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:45.640 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:45.640 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:45.640 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:45.640 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:45.640 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:45.640 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:45.640 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:45.640 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:45.640 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:45.640 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:45.640 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:45.640 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:45.640 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:45.640 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:45.640 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:45.640 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:45.640 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:45.640 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:45.640 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:45.640 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:45.640 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:45.640 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:45.640 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:45.640 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:45.640 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:45.640 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:45.641 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:45.641 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:45.641 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:45.641 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:45.641 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:45.641 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:45.641 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:45.641 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:45.641 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:45.641 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:45.641 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:45.641 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:45.641 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:45.641 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:45.641 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:45.641 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:45.641 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:45.641 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:45.641 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:45.641 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:45.641 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:02:45.641 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:45.641 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:45.641 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:45.641 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:45.641 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:45.641 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:45.641 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:45.641 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:45.641 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:45.641 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:45.641 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:45.641 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:45.641 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:45.641 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:45.641 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:45.641 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:45.641 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:45.641 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:45.641 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:45.641 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:45.641 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:45.641 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:45.641 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:45.641 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:45.641 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:45.641 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:45.641 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:45.641 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:45.641 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:45.641 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:45.641 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:45.641 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:45.641 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:45.641 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:45.641 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:45.641 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:45.641 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:45.641 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:45.641 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:45.641 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:45.641 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:45.641 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:45.641 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:45.641 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:45.641 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:45.641 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:45.641 21:42:51 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:02:45.641 21:42:51 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:45.641 21:42:51 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:02:45.641 ************************************ 00:02:45.641 END TEST build_native_dpdk 00:02:45.641 ************************************ 00:02:45.641 21:42:51 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:45.641 00:02:45.641 real 0m51.401s 00:02:45.641 user 6m5.432s 00:02:45.641 sys 0m59.453s 00:02:45.641 21:42:51 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:45.641 21:42:51 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:45.899 21:42:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:45.899 21:42:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:45.899 21:42:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:45.899 21:42:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:45.899 21:42:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:45.899 21:42:51 -- 
spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:45.899 21:42:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:45.899 21:42:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:45.899 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:46.157 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:46.157 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:46.157 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:46.416 Using 'verbs' RDMA provider 00:03:00.032 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:14.968 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:14.968 Creating mk/config.mk...done. 00:03:14.968 Creating mk/cc.flags.mk...done. 00:03:14.968 Type 'make' to build. 00:03:14.968 21:43:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:14.968 21:43:18 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:14.968 21:43:18 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:14.968 21:43:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.968 ************************************ 00:03:14.968 START TEST make 00:03:14.968 ************************************ 00:03:14.968 21:43:18 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:14.968 make[1]: Nothing to be done for 'all'. 00:03:36.891 CC lib/ut_mock/mock.o 00:03:36.891 CC lib/log/log.o 00:03:36.891 CC lib/log/log_flags.o 00:03:36.891 CC lib/log/log_deprecated.o 00:03:36.891 CC lib/ut/ut.o 00:03:36.891 LIB libspdk_log.a 00:03:36.891 LIB libspdk_ut_mock.a 00:03:36.891 LIB libspdk_ut.a 00:03:36.891 SO libspdk_log.so.7.0 00:03:36.891 SO libspdk_ut_mock.so.6.0 00:03:36.891 SO libspdk_ut.so.2.0 00:03:36.891 SYMLINK libspdk_ut.so 00:03:36.891 SYMLINK libspdk_log.so 00:03:36.891 SYMLINK libspdk_ut_mock.so 00:03:37.150 CC lib/dma/dma.o 00:03:37.150 CXX lib/trace_parser/trace.o 00:03:37.150 CC lib/util/base64.o 00:03:37.150 CC lib/util/cpuset.o 00:03:37.150 CC lib/util/crc16.o 00:03:37.150 CC lib/util/bit_array.o 00:03:37.150 CC lib/util/crc32.o 00:03:37.150 CC lib/util/crc32c.o 00:03:37.150 CC lib/ioat/ioat.o 00:03:37.150 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.150 CC lib/util/crc32_ieee.o 00:03:37.150 CC lib/vfio_user/host/vfio_user.o 00:03:37.150 CC lib/util/crc64.o 00:03:37.150 CC lib/util/dif.o 00:03:37.150 LIB libspdk_dma.a 00:03:37.150 CC lib/util/fd.o 00:03:37.408 SO libspdk_dma.so.4.0 00:03:37.408 CC lib/util/file.o 00:03:37.408 SYMLINK libspdk_dma.so 00:03:37.408 CC lib/util/hexlify.o 00:03:37.408 CC lib/util/iov.o 00:03:37.408 LIB libspdk_ioat.a 00:03:37.408 CC lib/util/math.o 00:03:37.408 SO libspdk_ioat.so.7.0 00:03:37.408 CC lib/util/pipe.o 00:03:37.408 CC lib/util/strerror_tls.o 00:03:37.408 LIB libspdk_vfio_user.a 00:03:37.408 CC lib/util/string.o 00:03:37.408 SYMLINK libspdk_ioat.so 00:03:37.408 SO libspdk_vfio_user.so.5.0 00:03:37.408 CC lib/util/uuid.o 00:03:37.408 CC lib/util/fd_group.o 00:03:37.666 SYMLINK libspdk_vfio_user.so 00:03:37.666 CC lib/util/xor.o 00:03:37.666 CC lib/util/zipf.o 00:03:37.666 LIB libspdk_util.a 00:03:37.924 SO libspdk_util.so.9.0 00:03:37.924 SYMLINK libspdk_util.so 00:03:37.924 LIB libspdk_trace_parser.a 00:03:38.182 
SO libspdk_trace_parser.so.5.0 00:03:38.182 SYMLINK libspdk_trace_parser.so 00:03:38.182 CC lib/json/json_parse.o 00:03:38.182 CC lib/json/json_util.o 00:03:38.182 CC lib/json/json_write.o 00:03:38.182 CC lib/conf/conf.o 00:03:38.182 CC lib/vmd/vmd.o 00:03:38.182 CC lib/vmd/led.o 00:03:38.182 CC lib/rdma/common.o 00:03:38.182 CC lib/env_dpdk/env.o 00:03:38.182 CC lib/rdma/rdma_verbs.o 00:03:38.182 CC lib/idxd/idxd.o 00:03:38.440 CC lib/env_dpdk/memory.o 00:03:38.440 CC lib/env_dpdk/pci.o 00:03:38.440 LIB libspdk_conf.a 00:03:38.440 CC lib/env_dpdk/init.o 00:03:38.440 SO libspdk_conf.so.6.0 00:03:38.440 LIB libspdk_json.a 00:03:38.440 LIB libspdk_rdma.a 00:03:38.440 CC lib/idxd/idxd_user.o 00:03:38.440 SYMLINK libspdk_conf.so 00:03:38.440 CC lib/env_dpdk/threads.o 00:03:38.440 SO libspdk_rdma.so.6.0 00:03:38.440 SO libspdk_json.so.6.0 00:03:38.698 SYMLINK libspdk_rdma.so 00:03:38.698 SYMLINK libspdk_json.so 00:03:38.698 CC lib/env_dpdk/pci_ioat.o 00:03:38.698 CC lib/env_dpdk/pci_virtio.o 00:03:38.698 CC lib/env_dpdk/pci_vmd.o 00:03:38.698 CC lib/env_dpdk/pci_idxd.o 00:03:38.698 CC lib/idxd/idxd_kernel.o 00:03:38.698 CC lib/env_dpdk/pci_event.o 00:03:38.698 CC lib/env_dpdk/sigbus_handler.o 00:03:38.698 CC lib/env_dpdk/pci_dpdk.o 00:03:38.698 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:38.698 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:38.956 LIB libspdk_vmd.a 00:03:38.956 SO libspdk_vmd.so.6.0 00:03:38.956 LIB libspdk_idxd.a 00:03:38.956 SYMLINK libspdk_vmd.so 00:03:38.956 SO libspdk_idxd.so.12.0 00:03:38.956 CC lib/jsonrpc/jsonrpc_server.o 00:03:38.956 CC lib/jsonrpc/jsonrpc_client.o 00:03:38.956 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:38.956 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:38.956 SYMLINK libspdk_idxd.so 00:03:39.214 LIB libspdk_jsonrpc.a 00:03:39.214 SO libspdk_jsonrpc.so.6.0 00:03:39.472 SYMLINK libspdk_jsonrpc.so 00:03:39.472 LIB libspdk_env_dpdk.a 00:03:39.731 SO libspdk_env_dpdk.so.14.0 00:03:39.731 CC lib/rpc/rpc.o 00:03:39.731 SYMLINK libspdk_env_dpdk.so 00:03:39.989 LIB libspdk_rpc.a 00:03:39.989 SO libspdk_rpc.so.6.0 00:03:39.989 SYMLINK libspdk_rpc.so 00:03:40.247 CC lib/trace/trace.o 00:03:40.247 CC lib/keyring/keyring.o 00:03:40.247 CC lib/trace/trace_flags.o 00:03:40.247 CC lib/trace/trace_rpc.o 00:03:40.247 CC lib/keyring/keyring_rpc.o 00:03:40.247 CC lib/notify/notify.o 00:03:40.247 CC lib/notify/notify_rpc.o 00:03:40.505 LIB libspdk_notify.a 00:03:40.505 SO libspdk_notify.so.6.0 00:03:40.505 LIB libspdk_keyring.a 00:03:40.505 LIB libspdk_trace.a 00:03:40.505 SO libspdk_keyring.so.1.0 00:03:40.505 SYMLINK libspdk_notify.so 00:03:40.505 SO libspdk_trace.so.10.0 00:03:40.764 SYMLINK libspdk_keyring.so 00:03:40.764 SYMLINK libspdk_trace.so 00:03:41.022 CC lib/sock/sock.o 00:03:41.022 CC lib/sock/sock_rpc.o 00:03:41.022 CC lib/thread/thread.o 00:03:41.022 CC lib/thread/iobuf.o 00:03:41.588 LIB libspdk_sock.a 00:03:41.588 SO libspdk_sock.so.9.0 00:03:41.588 SYMLINK libspdk_sock.so 00:03:41.847 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:41.847 CC lib/nvme/nvme_ctrlr.o 00:03:41.847 CC lib/nvme/nvme_fabric.o 00:03:41.847 CC lib/nvme/nvme_ns_cmd.o 00:03:41.847 CC lib/nvme/nvme_ns.o 00:03:41.847 CC lib/nvme/nvme_pcie_common.o 00:03:41.847 CC lib/nvme/nvme_pcie.o 00:03:41.847 CC lib/nvme/nvme_qpair.o 00:03:41.847 CC lib/nvme/nvme.o 00:03:42.412 LIB libspdk_thread.a 00:03:42.670 SO libspdk_thread.so.10.0 00:03:42.670 SYMLINK libspdk_thread.so 00:03:42.670 CC lib/nvme/nvme_quirks.o 00:03:42.670 CC lib/nvme/nvme_transport.o 00:03:42.670 CC lib/nvme/nvme_discovery.o 00:03:42.670 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:42.928 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:42.928 CC lib/nvme/nvme_tcp.o 00:03:42.928 CC lib/nvme/nvme_opal.o 00:03:42.928 CC lib/nvme/nvme_io_msg.o 00:03:42.928 CC lib/nvme/nvme_poll_group.o 00:03:43.495 CC lib/nvme/nvme_zns.o 00:03:43.495 CC lib/nvme/nvme_stubs.o 00:03:43.495 CC lib/accel/accel.o 00:03:43.495 CC lib/blob/blobstore.o 00:03:43.495 CC lib/init/json_config.o 00:03:43.495 CC lib/accel/accel_rpc.o 00:03:43.754 CC lib/accel/accel_sw.o 00:03:43.754 CC lib/virtio/virtio.o 00:03:43.754 CC lib/blob/request.o 00:03:43.754 CC lib/init/subsystem.o 00:03:44.012 CC lib/init/subsystem_rpc.o 00:03:44.012 CC lib/virtio/virtio_vhost_user.o 00:03:44.012 CC lib/virtio/virtio_vfio_user.o 00:03:44.012 CC lib/virtio/virtio_pci.o 00:03:44.012 CC lib/blob/zeroes.o 00:03:44.012 CC lib/nvme/nvme_auth.o 00:03:44.012 CC lib/init/rpc.o 00:03:44.270 CC lib/nvme/nvme_cuse.o 00:03:44.270 CC lib/blob/blob_bs_dev.o 00:03:44.270 CC lib/nvme/nvme_rdma.o 00:03:44.270 LIB libspdk_init.a 00:03:44.270 SO libspdk_init.so.5.0 00:03:44.270 LIB libspdk_virtio.a 00:03:44.270 SO libspdk_virtio.so.7.0 00:03:44.270 SYMLINK libspdk_init.so 00:03:44.529 SYMLINK libspdk_virtio.so 00:03:44.529 LIB libspdk_accel.a 00:03:44.529 SO libspdk_accel.so.15.0 00:03:44.529 CC lib/event/log_rpc.o 00:03:44.529 CC lib/event/app_rpc.o 00:03:44.529 CC lib/event/app.o 00:03:44.529 CC lib/event/reactor.o 00:03:44.529 CC lib/event/scheduler_static.o 00:03:44.529 SYMLINK libspdk_accel.so 00:03:44.787 CC lib/bdev/bdev.o 00:03:44.787 CC lib/bdev/bdev_rpc.o 00:03:44.787 CC lib/bdev/part.o 00:03:44.787 CC lib/bdev/bdev_zone.o 00:03:45.046 CC lib/bdev/scsi_nvme.o 00:03:45.046 LIB libspdk_event.a 00:03:45.046 SO libspdk_event.so.13.0 00:03:45.046 SYMLINK libspdk_event.so 00:03:45.612 LIB libspdk_nvme.a 00:03:45.870 SO libspdk_nvme.so.13.0 00:03:46.128 SYMLINK libspdk_nvme.so 00:03:46.696 LIB libspdk_blob.a 00:03:46.696 SO libspdk_blob.so.11.0 00:03:46.696 SYMLINK libspdk_blob.so 00:03:46.956 CC lib/lvol/lvol.o 00:03:46.956 CC lib/blobfs/blobfs.o 00:03:46.956 CC lib/blobfs/tree.o 00:03:47.524 LIB libspdk_bdev.a 00:03:47.783 SO libspdk_bdev.so.15.0 00:03:47.783 SYMLINK libspdk_bdev.so 00:03:48.041 LIB libspdk_blobfs.a 00:03:48.041 SO libspdk_blobfs.so.10.0 00:03:48.041 CC lib/scsi/dev.o 00:03:48.041 CC lib/nbd/nbd.o 00:03:48.041 CC lib/nbd/nbd_rpc.o 00:03:48.041 CC lib/nvmf/ctrlr.o 00:03:48.041 CC lib/scsi/lun.o 00:03:48.041 CC lib/ftl/ftl_core.o 00:03:48.041 CC lib/nvmf/ctrlr_discovery.o 00:03:48.041 CC lib/ublk/ublk.o 00:03:48.041 LIB libspdk_lvol.a 00:03:48.041 SO libspdk_lvol.so.10.0 00:03:48.041 SYMLINK libspdk_blobfs.so 00:03:48.041 CC lib/nvmf/ctrlr_bdev.o 00:03:48.041 SYMLINK libspdk_lvol.so 00:03:48.041 CC lib/nvmf/subsystem.o 00:03:48.300 CC lib/nvmf/nvmf.o 00:03:48.300 CC lib/scsi/port.o 00:03:48.300 CC lib/scsi/scsi.o 00:03:48.559 CC lib/ftl/ftl_init.o 00:03:48.559 LIB libspdk_nbd.a 00:03:48.559 CC lib/scsi/scsi_bdev.o 00:03:48.559 SO libspdk_nbd.so.7.0 00:03:48.559 CC lib/nvmf/nvmf_rpc.o 00:03:48.559 SYMLINK libspdk_nbd.so 00:03:48.559 CC lib/scsi/scsi_pr.o 00:03:48.559 CC lib/nvmf/transport.o 00:03:48.559 CC lib/ublk/ublk_rpc.o 00:03:48.559 CC lib/ftl/ftl_layout.o 00:03:48.817 CC lib/nvmf/tcp.o 00:03:48.817 LIB libspdk_ublk.a 00:03:48.817 SO libspdk_ublk.so.3.0 00:03:48.817 SYMLINK libspdk_ublk.so 00:03:48.817 CC lib/ftl/ftl_debug.o 00:03:48.817 CC lib/scsi/scsi_rpc.o 00:03:49.080 CC lib/ftl/ftl_io.o 00:03:49.080 CC lib/nvmf/stubs.o 00:03:49.080 CC lib/nvmf/mdns_server.o 00:03:49.080 CC 
lib/scsi/task.o 00:03:49.080 CC lib/ftl/ftl_sb.o 00:03:49.344 CC lib/ftl/ftl_l2p.o 00:03:49.344 CC lib/ftl/ftl_l2p_flat.o 00:03:49.344 LIB libspdk_scsi.a 00:03:49.344 CC lib/ftl/ftl_nv_cache.o 00:03:49.345 SO libspdk_scsi.so.9.0 00:03:49.345 CC lib/nvmf/rdma.o 00:03:49.345 CC lib/ftl/ftl_band.o 00:03:49.345 CC lib/nvmf/auth.o 00:03:49.345 CC lib/ftl/ftl_band_ops.o 00:03:49.602 SYMLINK libspdk_scsi.so 00:03:49.602 CC lib/ftl/ftl_writer.o 00:03:49.602 CC lib/ftl/ftl_rq.o 00:03:49.602 CC lib/ftl/ftl_reloc.o 00:03:49.602 CC lib/ftl/ftl_l2p_cache.o 00:03:49.602 CC lib/ftl/ftl_p2l.o 00:03:49.860 CC lib/ftl/mngt/ftl_mngt.o 00:03:49.860 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:49.860 CC lib/iscsi/conn.o 00:03:49.860 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:50.119 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.119 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.119 CC lib/iscsi/init_grp.o 00:03:50.119 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.119 CC lib/iscsi/iscsi.o 00:03:50.119 CC lib/vhost/vhost.o 00:03:50.376 CC lib/iscsi/md5.o 00:03:50.377 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.377 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.377 CC lib/iscsi/param.o 00:03:50.377 CC lib/vhost/vhost_rpc.o 00:03:50.377 CC lib/iscsi/portal_grp.o 00:03:50.377 CC lib/vhost/vhost_scsi.o 00:03:50.377 CC lib/vhost/vhost_blk.o 00:03:50.635 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:50.635 CC lib/iscsi/tgt_node.o 00:03:50.893 CC lib/vhost/rte_vhost_user.o 00:03:50.893 CC lib/iscsi/iscsi_subsystem.o 00:03:50.893 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:50.893 CC lib/iscsi/iscsi_rpc.o 00:03:50.893 CC lib/iscsi/task.o 00:03:51.150 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.150 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.150 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.150 CC lib/ftl/utils/ftl_conf.o 00:03:51.150 CC lib/ftl/utils/ftl_md.o 00:03:51.408 CC lib/ftl/utils/ftl_mempool.o 00:03:51.408 LIB libspdk_nvmf.a 00:03:51.408 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.408 CC lib/ftl/utils/ftl_property.o 00:03:51.408 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.408 SO libspdk_nvmf.so.18.0 00:03:51.408 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.408 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.408 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.666 LIB libspdk_iscsi.a 00:03:51.666 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.666 SO libspdk_iscsi.so.8.0 00:03:51.666 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.666 SYMLINK libspdk_nvmf.so 00:03:51.666 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.666 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.666 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.666 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.666 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:51.924 CC lib/ftl/base/ftl_base_dev.o 00:03:51.924 CC lib/ftl/base/ftl_base_bdev.o 00:03:51.924 LIB libspdk_vhost.a 00:03:51.924 SYMLINK libspdk_iscsi.so 00:03:51.924 CC lib/ftl/ftl_trace.o 00:03:51.924 SO libspdk_vhost.so.8.0 00:03:51.924 SYMLINK libspdk_vhost.so 00:03:52.181 LIB libspdk_ftl.a 00:03:52.440 SO libspdk_ftl.so.9.0 00:03:52.697 SYMLINK libspdk_ftl.so 00:03:52.956 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.215 CC module/accel/dsa/accel_dsa.o 00:03:53.215 CC module/accel/error/accel_error.o 00:03:53.215 CC module/sock/posix/posix.o 00:03:53.215 CC module/keyring/linux/keyring.o 00:03:53.215 CC module/keyring/file/keyring.o 00:03:53.215 CC module/accel/ioat/accel_ioat.o 00:03:53.215 CC module/accel/iaa/accel_iaa.o 00:03:53.215 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.215 CC module/blob/bdev/blob_bdev.o 00:03:53.215 LIB libspdk_env_dpdk_rpc.a 
00:03:53.215 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.215 SYMLINK libspdk_env_dpdk_rpc.so 00:03:53.215 CC module/accel/iaa/accel_iaa_rpc.o 00:03:53.215 CC module/keyring/linux/keyring_rpc.o 00:03:53.472 CC module/accel/error/accel_error_rpc.o 00:03:53.472 CC module/keyring/file/keyring_rpc.o 00:03:53.472 CC module/accel/ioat/accel_ioat_rpc.o 00:03:53.472 LIB libspdk_scheduler_dynamic.a 00:03:53.472 CC module/accel/dsa/accel_dsa_rpc.o 00:03:53.472 SO libspdk_scheduler_dynamic.so.4.0 00:03:53.472 LIB libspdk_accel_iaa.a 00:03:53.472 LIB libspdk_blob_bdev.a 00:03:53.472 LIB libspdk_keyring_linux.a 00:03:53.472 SO libspdk_accel_iaa.so.3.0 00:03:53.472 SYMLINK libspdk_scheduler_dynamic.so 00:03:53.472 SO libspdk_blob_bdev.so.11.0 00:03:53.472 LIB libspdk_accel_error.a 00:03:53.472 LIB libspdk_keyring_file.a 00:03:53.472 SO libspdk_keyring_linux.so.1.0 00:03:53.472 LIB libspdk_accel_ioat.a 00:03:53.472 SO libspdk_keyring_file.so.1.0 00:03:53.472 SO libspdk_accel_error.so.2.0 00:03:53.472 SYMLINK libspdk_accel_iaa.so 00:03:53.472 LIB libspdk_accel_dsa.a 00:03:53.472 SO libspdk_accel_ioat.so.6.0 00:03:53.472 SYMLINK libspdk_blob_bdev.so 00:03:53.472 CC module/sock/uring/uring.o 00:03:53.472 SYMLINK libspdk_keyring_linux.so 00:03:53.730 SO libspdk_accel_dsa.so.5.0 00:03:53.730 SYMLINK libspdk_accel_error.so 00:03:53.730 SYMLINK libspdk_keyring_file.so 00:03:53.730 SYMLINK libspdk_accel_ioat.so 00:03:53.730 SYMLINK libspdk_accel_dsa.so 00:03:53.730 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:53.730 CC module/scheduler/gscheduler/gscheduler.o 00:03:53.730 LIB libspdk_scheduler_dpdk_governor.a 00:03:53.730 CC module/bdev/lvol/vbdev_lvol.o 00:03:53.730 CC module/bdev/error/vbdev_error.o 00:03:53.730 CC module/bdev/malloc/bdev_malloc.o 00:03:53.730 CC module/bdev/delay/vbdev_delay.o 00:03:53.730 CC module/bdev/gpt/gpt.o 00:03:53.988 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:53.988 CC module/blobfs/bdev/blobfs_bdev.o 00:03:53.988 LIB libspdk_sock_posix.a 00:03:53.988 LIB libspdk_scheduler_gscheduler.a 00:03:53.988 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:53.988 SO libspdk_sock_posix.so.6.0 00:03:53.988 SO libspdk_scheduler_gscheduler.so.4.0 00:03:53.988 SYMLINK libspdk_scheduler_gscheduler.so 00:03:53.988 CC module/bdev/gpt/vbdev_gpt.o 00:03:53.988 SYMLINK libspdk_sock_posix.so 00:03:53.988 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:53.988 CC module/bdev/error/vbdev_error_rpc.o 00:03:53.988 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:53.988 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.246 CC module/bdev/null/bdev_null.o 00:03:54.246 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.246 LIB libspdk_sock_uring.a 00:03:54.246 LIB libspdk_blobfs_bdev.a 00:03:54.246 CC module/bdev/null/bdev_null_rpc.o 00:03:54.246 LIB libspdk_bdev_error.a 00:03:54.246 SO libspdk_sock_uring.so.5.0 00:03:54.246 SO libspdk_blobfs_bdev.so.6.0 00:03:54.246 LIB libspdk_bdev_gpt.a 00:03:54.246 SO libspdk_bdev_error.so.6.0 00:03:54.246 LIB libspdk_bdev_delay.a 00:03:54.246 SO libspdk_bdev_gpt.so.6.0 00:03:54.504 SYMLINK libspdk_sock_uring.so 00:03:54.504 SYMLINK libspdk_blobfs_bdev.so 00:03:54.504 SO libspdk_bdev_delay.so.6.0 00:03:54.504 SYMLINK libspdk_bdev_error.so 00:03:54.504 LIB libspdk_bdev_malloc.a 00:03:54.504 SYMLINK libspdk_bdev_gpt.so 00:03:54.504 LIB libspdk_bdev_lvol.a 00:03:54.504 SO libspdk_bdev_malloc.so.6.0 00:03:54.504 SYMLINK libspdk_bdev_delay.so 00:03:54.504 SO libspdk_bdev_lvol.so.6.0 00:03:54.504 LIB libspdk_bdev_null.a 00:03:54.504 SYMLINK libspdk_bdev_lvol.so 00:03:54.504 SYMLINK 
libspdk_bdev_malloc.so 00:03:54.504 SO libspdk_bdev_null.so.6.0 00:03:54.504 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.504 CC module/bdev/nvme/bdev_nvme.o 00:03:54.504 CC module/bdev/raid/bdev_raid.o 00:03:54.504 CC module/bdev/split/vbdev_split.o 00:03:54.504 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:54.504 CC module/bdev/uring/bdev_uring.o 00:03:54.504 SYMLINK libspdk_bdev_null.so 00:03:54.762 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:54.762 CC module/bdev/aio/bdev_aio.o 00:03:54.762 CC module/bdev/ftl/bdev_ftl.o 00:03:54.762 CC module/bdev/iscsi/bdev_iscsi.o 00:03:54.762 CC module/bdev/aio/bdev_aio_rpc.o 00:03:54.762 CC module/bdev/split/vbdev_split_rpc.o 00:03:54.762 LIB libspdk_bdev_passthru.a 00:03:55.021 SO libspdk_bdev_passthru.so.6.0 00:03:55.021 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:55.021 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:55.021 SYMLINK libspdk_bdev_passthru.so 00:03:55.021 CC module/bdev/uring/bdev_uring_rpc.o 00:03:55.021 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.021 LIB libspdk_bdev_aio.a 00:03:55.021 LIB libspdk_bdev_split.a 00:03:55.021 SO libspdk_bdev_aio.so.6.0 00:03:55.021 SO libspdk_bdev_split.so.6.0 00:03:55.021 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.021 SYMLINK libspdk_bdev_aio.so 00:03:55.021 SYMLINK libspdk_bdev_split.so 00:03:55.021 CC module/bdev/raid/bdev_raid_sb.o 00:03:55.021 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.021 LIB libspdk_bdev_zone_block.a 00:03:55.021 LIB libspdk_bdev_iscsi.a 00:03:55.021 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:55.021 LIB libspdk_bdev_uring.a 00:03:55.280 SO libspdk_bdev_zone_block.so.6.0 00:03:55.280 SO libspdk_bdev_iscsi.so.6.0 00:03:55.280 SO libspdk_bdev_uring.so.6.0 00:03:55.280 LIB libspdk_bdev_ftl.a 00:03:55.280 SYMLINK libspdk_bdev_zone_block.so 00:03:55.280 SYMLINK libspdk_bdev_iscsi.so 00:03:55.280 CC module/bdev/raid/raid0.o 00:03:55.280 CC module/bdev/nvme/nvme_rpc.o 00:03:55.280 SO libspdk_bdev_ftl.so.6.0 00:03:55.280 SYMLINK libspdk_bdev_uring.so 00:03:55.280 CC module/bdev/nvme/bdev_mdns_client.o 00:03:55.280 CC module/bdev/nvme/vbdev_opal.o 00:03:55.280 SYMLINK libspdk_bdev_ftl.so 00:03:55.280 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:55.280 CC module/bdev/raid/raid1.o 00:03:55.538 CC module/bdev/raid/concat.o 00:03:55.538 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:55.538 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:55.538 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:55.798 LIB libspdk_bdev_raid.a 00:03:55.798 SO libspdk_bdev_raid.so.6.0 00:03:55.798 LIB libspdk_bdev_virtio.a 00:03:55.798 SO libspdk_bdev_virtio.so.6.0 00:03:55.798 SYMLINK libspdk_bdev_raid.so 00:03:55.798 SYMLINK libspdk_bdev_virtio.so 00:03:56.733 LIB libspdk_bdev_nvme.a 00:03:56.733 SO libspdk_bdev_nvme.so.7.0 00:03:56.992 SYMLINK libspdk_bdev_nvme.so 00:03:57.559 CC module/event/subsystems/vmd/vmd.o 00:03:57.559 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:57.559 CC module/event/subsystems/keyring/keyring.o 00:03:57.559 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.559 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.559 CC module/event/subsystems/sock/sock.o 00:03:57.559 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.559 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.559 LIB libspdk_event_scheduler.a 00:03:57.559 LIB libspdk_event_keyring.a 00:03:57.559 LIB libspdk_event_vhost_blk.a 00:03:57.559 LIB libspdk_event_sock.a 00:03:57.559 LIB libspdk_event_vmd.a 00:03:57.559 SO libspdk_event_scheduler.so.4.0 00:03:57.559 SO 
libspdk_event_keyring.so.1.0 00:03:57.559 LIB libspdk_event_iobuf.a 00:03:57.559 SO libspdk_event_vhost_blk.so.3.0 00:03:57.559 SO libspdk_event_sock.so.5.0 00:03:57.817 SO libspdk_event_vmd.so.6.0 00:03:57.817 SYMLINK libspdk_event_scheduler.so 00:03:57.817 SO libspdk_event_iobuf.so.3.0 00:03:57.817 SYMLINK libspdk_event_keyring.so 00:03:57.817 SYMLINK libspdk_event_sock.so 00:03:57.817 SYMLINK libspdk_event_vhost_blk.so 00:03:57.817 SYMLINK libspdk_event_vmd.so 00:03:57.817 SYMLINK libspdk_event_iobuf.so 00:03:58.076 CC module/event/subsystems/accel/accel.o 00:03:58.335 LIB libspdk_event_accel.a 00:03:58.335 SO libspdk_event_accel.so.6.0 00:03:58.335 SYMLINK libspdk_event_accel.so 00:03:58.593 CC module/event/subsystems/bdev/bdev.o 00:03:58.853 LIB libspdk_event_bdev.a 00:03:58.853 SO libspdk_event_bdev.so.6.0 00:03:59.112 SYMLINK libspdk_event_bdev.so 00:03:59.112 CC module/event/subsystems/scsi/scsi.o 00:03:59.112 CC module/event/subsystems/nbd/nbd.o 00:03:59.112 CC module/event/subsystems/ublk/ublk.o 00:03:59.112 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:59.112 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:59.371 LIB libspdk_event_nbd.a 00:03:59.371 LIB libspdk_event_ublk.a 00:03:59.371 LIB libspdk_event_scsi.a 00:03:59.371 SO libspdk_event_nbd.so.6.0 00:03:59.371 SO libspdk_event_ublk.so.3.0 00:03:59.371 SO libspdk_event_scsi.so.6.0 00:03:59.371 SYMLINK libspdk_event_nbd.so 00:03:59.371 SYMLINK libspdk_event_ublk.so 00:03:59.629 SYMLINK libspdk_event_scsi.so 00:03:59.629 LIB libspdk_event_nvmf.a 00:03:59.629 SO libspdk_event_nvmf.so.6.0 00:03:59.629 SYMLINK libspdk_event_nvmf.so 00:03:59.887 CC module/event/subsystems/iscsi/iscsi.o 00:03:59.887 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:59.887 LIB libspdk_event_vhost_scsi.a 00:03:59.887 LIB libspdk_event_iscsi.a 00:04:00.145 SO libspdk_event_vhost_scsi.so.3.0 00:04:00.145 SO libspdk_event_iscsi.so.6.0 00:04:00.145 SYMLINK libspdk_event_vhost_scsi.so 00:04:00.145 SYMLINK libspdk_event_iscsi.so 00:04:00.145 SO libspdk.so.6.0 00:04:00.145 SYMLINK libspdk.so 00:04:00.412 CXX app/trace/trace.o 00:04:00.412 TEST_HEADER include/spdk/accel.h 00:04:00.412 CC app/trace_record/trace_record.o 00:04:00.412 TEST_HEADER include/spdk/accel_module.h 00:04:00.412 TEST_HEADER include/spdk/assert.h 00:04:00.412 TEST_HEADER include/spdk/barrier.h 00:04:00.413 TEST_HEADER include/spdk/base64.h 00:04:00.413 TEST_HEADER include/spdk/bdev.h 00:04:00.413 TEST_HEADER include/spdk/bdev_module.h 00:04:00.413 TEST_HEADER include/spdk/bdev_zone.h 00:04:00.413 TEST_HEADER include/spdk/bit_array.h 00:04:00.413 TEST_HEADER include/spdk/bit_pool.h 00:04:00.413 TEST_HEADER include/spdk/blob_bdev.h 00:04:00.413 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:00.413 TEST_HEADER include/spdk/blobfs.h 00:04:00.413 TEST_HEADER include/spdk/blob.h 00:04:00.413 TEST_HEADER include/spdk/conf.h 00:04:00.413 TEST_HEADER include/spdk/config.h 00:04:00.413 TEST_HEADER include/spdk/cpuset.h 00:04:00.672 TEST_HEADER include/spdk/crc16.h 00:04:00.672 TEST_HEADER include/spdk/crc32.h 00:04:00.672 TEST_HEADER include/spdk/crc64.h 00:04:00.672 TEST_HEADER include/spdk/dif.h 00:04:00.672 TEST_HEADER include/spdk/dma.h 00:04:00.672 TEST_HEADER include/spdk/endian.h 00:04:00.672 TEST_HEADER include/spdk/env_dpdk.h 00:04:00.672 TEST_HEADER include/spdk/env.h 00:04:00.672 TEST_HEADER include/spdk/event.h 00:04:00.672 CC app/nvmf_tgt/nvmf_main.o 00:04:00.672 TEST_HEADER include/spdk/fd_group.h 00:04:00.672 TEST_HEADER include/spdk/fd.h 00:04:00.672 TEST_HEADER 
include/spdk/file.h 00:04:00.672 TEST_HEADER include/spdk/ftl.h 00:04:00.672 TEST_HEADER include/spdk/gpt_spec.h 00:04:00.672 TEST_HEADER include/spdk/hexlify.h 00:04:00.672 TEST_HEADER include/spdk/histogram_data.h 00:04:00.672 TEST_HEADER include/spdk/idxd.h 00:04:00.672 TEST_HEADER include/spdk/idxd_spec.h 00:04:00.672 TEST_HEADER include/spdk/init.h 00:04:00.672 TEST_HEADER include/spdk/ioat.h 00:04:00.672 TEST_HEADER include/spdk/ioat_spec.h 00:04:00.672 TEST_HEADER include/spdk/iscsi_spec.h 00:04:00.672 TEST_HEADER include/spdk/json.h 00:04:00.672 TEST_HEADER include/spdk/jsonrpc.h 00:04:00.672 TEST_HEADER include/spdk/keyring.h 00:04:00.672 CC examples/accel/perf/accel_perf.o 00:04:00.672 TEST_HEADER include/spdk/keyring_module.h 00:04:00.672 TEST_HEADER include/spdk/likely.h 00:04:00.672 TEST_HEADER include/spdk/log.h 00:04:00.672 TEST_HEADER include/spdk/lvol.h 00:04:00.672 TEST_HEADER include/spdk/memory.h 00:04:00.672 TEST_HEADER include/spdk/mmio.h 00:04:00.672 TEST_HEADER include/spdk/nbd.h 00:04:00.672 TEST_HEADER include/spdk/notify.h 00:04:00.672 TEST_HEADER include/spdk/nvme.h 00:04:00.672 TEST_HEADER include/spdk/nvme_intel.h 00:04:00.672 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:00.672 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:00.672 TEST_HEADER include/spdk/nvme_spec.h 00:04:00.672 TEST_HEADER include/spdk/nvme_zns.h 00:04:00.672 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:00.672 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.672 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:00.672 CC test/accel/dif/dif.o 00:04:00.672 TEST_HEADER include/spdk/nvmf.h 00:04:00.672 TEST_HEADER include/spdk/nvmf_spec.h 00:04:00.672 CC test/blobfs/mkfs/mkfs.o 00:04:00.672 TEST_HEADER include/spdk/nvmf_transport.h 00:04:00.672 TEST_HEADER include/spdk/opal.h 00:04:00.672 CC test/bdev/bdevio/bdevio.o 00:04:00.672 TEST_HEADER include/spdk/opal_spec.h 00:04:00.672 TEST_HEADER include/spdk/pci_ids.h 00:04:00.672 CC test/app/bdev_svc/bdev_svc.o 00:04:00.672 TEST_HEADER include/spdk/pipe.h 00:04:00.672 TEST_HEADER include/spdk/queue.h 00:04:00.672 TEST_HEADER include/spdk/reduce.h 00:04:00.672 TEST_HEADER include/spdk/rpc.h 00:04:00.672 TEST_HEADER include/spdk/scheduler.h 00:04:00.672 TEST_HEADER include/spdk/scsi.h 00:04:00.672 TEST_HEADER include/spdk/scsi_spec.h 00:04:00.672 TEST_HEADER include/spdk/sock.h 00:04:00.672 TEST_HEADER include/spdk/stdinc.h 00:04:00.672 TEST_HEADER include/spdk/string.h 00:04:00.672 TEST_HEADER include/spdk/thread.h 00:04:00.672 TEST_HEADER include/spdk/trace.h 00:04:00.672 TEST_HEADER include/spdk/trace_parser.h 00:04:00.672 TEST_HEADER include/spdk/tree.h 00:04:00.672 TEST_HEADER include/spdk/ublk.h 00:04:00.672 TEST_HEADER include/spdk/util.h 00:04:00.672 TEST_HEADER include/spdk/uuid.h 00:04:00.672 TEST_HEADER include/spdk/version.h 00:04:00.672 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:00.672 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:00.672 TEST_HEADER include/spdk/vhost.h 00:04:00.672 TEST_HEADER include/spdk/vmd.h 00:04:00.672 TEST_HEADER include/spdk/xor.h 00:04:00.672 TEST_HEADER include/spdk/zipf.h 00:04:00.672 CXX test/cpp_headers/accel.o 00:04:00.933 LINK spdk_trace_record 00:04:00.933 LINK nvmf_tgt 00:04:00.933 LINK bdev_svc 00:04:00.933 LINK mkfs 00:04:00.933 CXX test/cpp_headers/accel_module.o 00:04:00.933 LINK hello_bdev 00:04:00.933 LINK spdk_trace 00:04:01.195 LINK bdevio 00:04:01.195 LINK dif 00:04:01.195 CXX test/cpp_headers/assert.o 00:04:01.195 LINK accel_perf 00:04:01.195 CC examples/blob/hello_world/hello_blob.o 
00:04:01.195 CC test/app/histogram_perf/histogram_perf.o 00:04:01.195 CC examples/ioat/perf/perf.o 00:04:01.195 CXX test/cpp_headers/barrier.o 00:04:01.195 CC examples/bdev/bdevperf/bdevperf.o 00:04:01.454 CXX test/cpp_headers/base64.o 00:04:01.454 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:01.454 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.454 CXX test/cpp_headers/bdev.o 00:04:01.454 LINK histogram_perf 00:04:01.454 LINK ioat_perf 00:04:01.454 LINK hello_blob 00:04:01.454 CC app/spdk_tgt/spdk_tgt.o 00:04:01.454 CXX test/cpp_headers/bdev_module.o 00:04:01.454 CC examples/ioat/verify/verify.o 00:04:01.454 CXX test/cpp_headers/bdev_zone.o 00:04:01.454 LINK iscsi_tgt 00:04:01.712 CC examples/blob/cli/blobcli.o 00:04:01.712 CXX test/cpp_headers/bit_array.o 00:04:01.712 CXX test/cpp_headers/bit_pool.o 00:04:01.712 LINK spdk_tgt 00:04:01.712 LINK nvme_fuzz 00:04:01.712 CXX test/cpp_headers/blob_bdev.o 00:04:01.712 CXX test/cpp_headers/blobfs_bdev.o 00:04:01.712 CXX test/cpp_headers/blobfs.o 00:04:01.712 LINK verify 00:04:01.712 CXX test/cpp_headers/blob.o 00:04:01.970 CXX test/cpp_headers/conf.o 00:04:01.970 CXX test/cpp_headers/config.o 00:04:01.970 CXX test/cpp_headers/cpuset.o 00:04:01.970 CXX test/cpp_headers/crc16.o 00:04:01.970 CXX test/cpp_headers/crc32.o 00:04:01.970 CXX test/cpp_headers/crc64.o 00:04:01.970 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:01.970 LINK bdevperf 00:04:01.970 CC app/spdk_lspci/spdk_lspci.o 00:04:01.970 CC app/spdk_nvme_perf/perf.o 00:04:02.229 CXX test/cpp_headers/dif.o 00:04:02.229 CXX test/cpp_headers/dma.o 00:04:02.229 CC test/app/jsoncat/jsoncat.o 00:04:02.229 LINK blobcli 00:04:02.229 LINK spdk_lspci 00:04:02.229 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:02.229 CXX test/cpp_headers/endian.o 00:04:02.229 CXX test/cpp_headers/env_dpdk.o 00:04:02.229 LINK jsoncat 00:04:02.487 CC app/spdk_nvme_identify/identify.o 00:04:02.487 CC examples/nvme/hello_world/hello_world.o 00:04:02.487 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:02.487 CXX test/cpp_headers/env.o 00:04:02.487 CC examples/sock/hello_world/hello_sock.o 00:04:02.487 CC examples/vmd/led/led.o 00:04:02.487 CC examples/nvme/reconnect/reconnect.o 00:04:02.487 CC examples/vmd/lsvmd/lsvmd.o 00:04:02.745 LINK hello_world 00:04:02.745 CXX test/cpp_headers/event.o 00:04:02.745 LINK lsvmd 00:04:02.745 LINK led 00:04:02.745 CXX test/cpp_headers/fd_group.o 00:04:02.745 LINK hello_sock 00:04:02.745 LINK vhost_fuzz 00:04:03.002 CXX test/cpp_headers/fd.o 00:04:03.002 LINK spdk_nvme_perf 00:04:03.002 CXX test/cpp_headers/file.o 00:04:03.002 LINK reconnect 00:04:03.002 CXX test/cpp_headers/ftl.o 00:04:03.002 CC examples/nvmf/nvmf/nvmf.o 00:04:03.002 CXX test/cpp_headers/gpt_spec.o 00:04:03.002 CXX test/cpp_headers/hexlify.o 00:04:03.002 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.002 CC app/spdk_top/spdk_top.o 00:04:03.261 CXX test/cpp_headers/histogram_data.o 00:04:03.261 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:03.261 LINK spdk_nvme_identify 00:04:03.261 LINK spdk_nvme_discover 00:04:03.261 CC test/dma/test_dma/test_dma.o 00:04:03.261 LINK nvmf 00:04:03.519 CXX test/cpp_headers/idxd.o 00:04:03.519 CC app/vhost/vhost.o 00:04:03.519 CC examples/util/zipf/zipf.o 00:04:03.519 CXX test/cpp_headers/idxd_spec.o 00:04:03.519 CC examples/nvme/arbitration/arbitration.o 00:04:03.519 LINK vhost 00:04:03.519 LINK iscsi_fuzz 00:04:03.519 LINK zipf 00:04:03.519 CXX test/cpp_headers/init.o 00:04:03.777 CC examples/nvme/hotplug/hotplug.o 00:04:03.777 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.777 LINK 
test_dma 00:04:03.777 LINK nvme_manage 00:04:03.777 CXX test/cpp_headers/ioat.o 00:04:03.777 CC examples/nvme/abort/abort.o 00:04:03.777 CXX test/cpp_headers/ioat_spec.o 00:04:03.777 LINK cmb_copy 00:04:03.777 CXX test/cpp_headers/iscsi_spec.o 00:04:04.064 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:04.064 LINK hotplug 00:04:04.064 LINK arbitration 00:04:04.064 CC test/app/stub/stub.o 00:04:04.064 LINK spdk_top 00:04:04.064 CXX test/cpp_headers/json.o 00:04:04.064 CXX test/cpp_headers/jsonrpc.o 00:04:04.064 CXX test/cpp_headers/keyring.o 00:04:04.064 CXX test/cpp_headers/keyring_module.o 00:04:04.064 CC app/spdk_dd/spdk_dd.o 00:04:04.064 CXX test/cpp_headers/likely.o 00:04:04.064 LINK pmr_persistence 00:04:04.064 LINK stub 00:04:04.323 LINK abort 00:04:04.323 CXX test/cpp_headers/log.o 00:04:04.323 CXX test/cpp_headers/lvol.o 00:04:04.323 CC app/fio/nvme/fio_plugin.o 00:04:04.323 CC app/fio/bdev/fio_plugin.o 00:04:04.323 CC examples/thread/thread/thread_ex.o 00:04:04.323 CC examples/idxd/perf/perf.o 00:04:04.581 CC test/event/event_perf/event_perf.o 00:04:04.581 CXX test/cpp_headers/memory.o 00:04:04.582 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:04.582 CC test/env/vtophys/vtophys.o 00:04:04.582 LINK spdk_dd 00:04:04.582 CC test/env/mem_callbacks/mem_callbacks.o 00:04:04.582 LINK event_perf 00:04:04.582 CXX test/cpp_headers/mmio.o 00:04:04.582 LINK vtophys 00:04:04.582 LINK env_dpdk_post_init 00:04:04.840 LINK thread 00:04:04.840 LINK mem_callbacks 00:04:04.840 CXX test/cpp_headers/nbd.o 00:04:04.840 LINK idxd_perf 00:04:04.840 CXX test/cpp_headers/notify.o 00:04:04.840 LINK spdk_bdev 00:04:04.840 CC test/event/reactor/reactor.o 00:04:04.840 LINK spdk_nvme 00:04:04.840 CC test/event/reactor_perf/reactor_perf.o 00:04:04.840 CC test/env/memory/memory_ut.o 00:04:04.840 CXX test/cpp_headers/nvme.o 00:04:05.098 CC test/env/pci/pci_ut.o 00:04:05.098 CXX test/cpp_headers/nvme_intel.o 00:04:05.098 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:05.098 LINK reactor 00:04:05.098 LINK reactor_perf 00:04:05.098 CC test/event/app_repeat/app_repeat.o 00:04:05.098 CXX test/cpp_headers/nvme_ocssd.o 00:04:05.098 CC test/nvme/aer/aer.o 00:04:05.098 LINK interrupt_tgt 00:04:05.099 CC test/lvol/esnap/esnap.o 00:04:05.358 CC test/nvme/reset/reset.o 00:04:05.358 LINK app_repeat 00:04:05.358 CC test/nvme/sgl/sgl.o 00:04:05.358 CC test/event/scheduler/scheduler.o 00:04:05.358 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:05.358 LINK pci_ut 00:04:05.358 LINK aer 00:04:05.616 CC test/nvme/e2edp/nvme_dp.o 00:04:05.616 CC test/nvme/overhead/overhead.o 00:04:05.616 LINK reset 00:04:05.616 LINK scheduler 00:04:05.616 CXX test/cpp_headers/nvme_spec.o 00:04:05.616 LINK sgl 00:04:05.616 CXX test/cpp_headers/nvme_zns.o 00:04:05.616 LINK memory_ut 00:04:05.616 CXX test/cpp_headers/nvmf_cmd.o 00:04:05.874 LINK nvme_dp 00:04:05.874 CC test/nvme/err_injection/err_injection.o 00:04:05.874 LINK overhead 00:04:05.874 CC test/nvme/startup/startup.o 00:04:05.874 CC test/nvme/reserve/reserve.o 00:04:05.874 CC test/nvme/simple_copy/simple_copy.o 00:04:05.874 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:05.874 CC test/nvme/connect_stress/connect_stress.o 00:04:05.874 CC test/nvme/boot_partition/boot_partition.o 00:04:06.133 LINK err_injection 00:04:06.133 LINK startup 00:04:06.133 CC test/nvme/compliance/nvme_compliance.o 00:04:06.133 LINK reserve 00:04:06.133 CC test/nvme/fused_ordering/fused_ordering.o 00:04:06.133 CXX test/cpp_headers/nvmf.o 00:04:06.133 LINK simple_copy 00:04:06.133 LINK connect_stress 
00:04:06.133 LINK boot_partition 00:04:06.133 CXX test/cpp_headers/nvmf_spec.o 00:04:06.392 LINK fused_ordering 00:04:06.392 CXX test/cpp_headers/nvmf_transport.o 00:04:06.392 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:06.392 CC test/rpc_client/rpc_client_test.o 00:04:06.392 CC test/nvme/fdp/fdp.o 00:04:06.392 CC test/nvme/cuse/cuse.o 00:04:06.392 CXX test/cpp_headers/opal.o 00:04:06.392 LINK nvme_compliance 00:04:06.392 CXX test/cpp_headers/opal_spec.o 00:04:06.392 CC test/thread/poller_perf/poller_perf.o 00:04:06.650 CXX test/cpp_headers/pci_ids.o 00:04:06.650 LINK doorbell_aers 00:04:06.650 LINK rpc_client_test 00:04:06.650 CXX test/cpp_headers/pipe.o 00:04:06.650 CXX test/cpp_headers/queue.o 00:04:06.650 CXX test/cpp_headers/reduce.o 00:04:06.650 CXX test/cpp_headers/rpc.o 00:04:06.650 LINK poller_perf 00:04:06.650 CXX test/cpp_headers/scheduler.o 00:04:06.650 CXX test/cpp_headers/scsi.o 00:04:06.650 CXX test/cpp_headers/scsi_spec.o 00:04:06.650 LINK fdp 00:04:06.650 CXX test/cpp_headers/sock.o 00:04:06.909 CXX test/cpp_headers/stdinc.o 00:04:06.909 CXX test/cpp_headers/string.o 00:04:06.909 CXX test/cpp_headers/thread.o 00:04:06.909 CXX test/cpp_headers/trace.o 00:04:06.909 CXX test/cpp_headers/trace_parser.o 00:04:06.909 CXX test/cpp_headers/tree.o 00:04:06.909 CXX test/cpp_headers/ublk.o 00:04:06.909 CXX test/cpp_headers/util.o 00:04:06.909 CXX test/cpp_headers/uuid.o 00:04:06.909 CXX test/cpp_headers/version.o 00:04:06.909 CXX test/cpp_headers/vfio_user_pci.o 00:04:06.909 CXX test/cpp_headers/vfio_user_spec.o 00:04:07.167 CXX test/cpp_headers/vhost.o 00:04:07.167 CXX test/cpp_headers/vmd.o 00:04:07.167 CXX test/cpp_headers/xor.o 00:04:07.167 CXX test/cpp_headers/zipf.o 00:04:07.733 LINK cuse 00:04:10.261 LINK esnap 00:04:10.577 00:04:10.577 real 0m57.650s 00:04:10.577 user 5m3.247s 00:04:10.577 sys 1m8.269s 00:04:10.577 21:44:16 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:10.577 21:44:16 make -- common/autotest_common.sh@10 -- $ set +x 00:04:10.577 ************************************ 00:04:10.577 END TEST make 00:04:10.577 ************************************ 00:04:10.577 21:44:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:10.577 21:44:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:10.577 21:44:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:10.577 21:44:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.577 21:44:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:10.577 21:44:16 -- pm/common@44 -- $ pid=5877 00:04:10.577 21:44:16 -- pm/common@50 -- $ kill -TERM 5877 00:04:10.577 21:44:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.577 21:44:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:10.577 21:44:16 -- pm/common@44 -- $ pid=5878 00:04:10.577 21:44:16 -- pm/common@50 -- $ kill -TERM 5878 00:04:10.850 21:44:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:10.850 21:44:16 -- nvmf/common.sh@7 -- # uname -s 00:04:10.850 21:44:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.850 21:44:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.850 21:44:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.850 21:44:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.850 21:44:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.850 21:44:16 -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:04:10.850 21:44:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.850 21:44:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.850 21:44:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.850 21:44:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.850 21:44:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:04:10.850 21:44:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:04:10.850 21:44:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.850 21:44:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.850 21:44:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:10.850 21:44:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.850 21:44:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:10.850 21:44:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.850 21:44:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.850 21:44:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.850 21:44:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.850 21:44:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.850 21:44:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.850 21:44:16 -- paths/export.sh@5 -- # export PATH 00:04:10.850 21:44:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.850 21:44:16 -- nvmf/common.sh@47 -- # : 0 00:04:10.850 21:44:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:10.850 21:44:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:10.850 21:44:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.850 21:44:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.850 21:44:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.850 21:44:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:10.850 21:44:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:10.850 21:44:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:10.850 21:44:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:10.850 21:44:16 -- spdk/autotest.sh@32 -- # uname -s 00:04:10.850 21:44:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:10.850 21:44:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:10.850 21:44:16 -- spdk/autotest.sh@34 
-- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:10.850 21:44:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:10.850 21:44:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:10.850 21:44:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:10.850 21:44:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:10.850 21:44:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:10.850 21:44:16 -- spdk/autotest.sh@48 -- # udevadm_pid=64857 00:04:10.850 21:44:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:10.850 21:44:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:10.850 21:44:16 -- pm/common@17 -- # local monitor 00:04:10.850 21:44:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.850 21:44:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.850 21:44:16 -- pm/common@25 -- # sleep 1 00:04:10.850 21:44:16 -- pm/common@21 -- # date +%s 00:04:10.851 21:44:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721857456 00:04:10.851 21:44:16 -- pm/common@21 -- # date +%s 00:04:10.851 21:44:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721857456 00:04:10.851 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721857456_collect-cpu-load.pm.log 00:04:10.851 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721857456_collect-vmstat.pm.log 00:04:11.787 21:44:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:11.787 21:44:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:11.787 21:44:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:11.787 21:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:11.787 21:44:17 -- spdk/autotest.sh@59 -- # create_test_list 00:04:11.787 21:44:17 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:11.787 21:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:11.787 21:44:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:11.787 21:44:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:11.787 21:44:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:11.787 21:44:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:11.787 21:44:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:11.787 21:44:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:11.787 21:44:17 -- common/autotest_common.sh@1451 -- # uname 00:04:11.787 21:44:17 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:11.787 21:44:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:11.787 21:44:17 -- common/autotest_common.sh@1471 -- # uname 00:04:11.787 21:44:17 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:11.787 21:44:17 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:11.787 21:44:17 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:11.787 21:44:17 -- spdk/autotest.sh@72 -- # hash lcov 00:04:11.787 21:44:17 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:11.787 21:44:17 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:11.787 --rc 
lcov_branch_coverage=1 00:04:11.787 --rc lcov_function_coverage=1 00:04:11.787 --rc genhtml_branch_coverage=1 00:04:11.787 --rc genhtml_function_coverage=1 00:04:11.787 --rc genhtml_legend=1 00:04:11.787 --rc geninfo_all_blocks=1 00:04:11.787 ' 00:04:11.787 21:44:17 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:11.787 --rc lcov_branch_coverage=1 00:04:11.787 --rc lcov_function_coverage=1 00:04:11.787 --rc genhtml_branch_coverage=1 00:04:11.787 --rc genhtml_function_coverage=1 00:04:11.787 --rc genhtml_legend=1 00:04:11.787 --rc geninfo_all_blocks=1 00:04:11.787 ' 00:04:11.787 21:44:17 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:11.787 --rc lcov_branch_coverage=1 00:04:11.787 --rc lcov_function_coverage=1 00:04:11.787 --rc genhtml_branch_coverage=1 00:04:11.787 --rc genhtml_function_coverage=1 00:04:11.787 --rc genhtml_legend=1 00:04:11.787 --rc geninfo_all_blocks=1 00:04:11.787 --no-external' 00:04:11.787 21:44:17 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:11.787 --rc lcov_branch_coverage=1 00:04:11.787 --rc lcov_function_coverage=1 00:04:11.787 --rc genhtml_branch_coverage=1 00:04:11.787 --rc genhtml_function_coverage=1 00:04:11.787 --rc genhtml_legend=1 00:04:11.787 --rc geninfo_all_blocks=1 00:04:11.787 --no-external' 00:04:11.787 21:44:17 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:12.046 lcov: LCOV version 1.14 00:04:12.046 21:44:17 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:26.925 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:26.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no 
functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:39.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:39.129 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 
00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:39.130 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 
00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:39.130 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:39.130 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:39.131 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:39.131 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:39.131 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:39.131 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:39.131 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:42.419 21:44:47 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:42.420 21:44:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:42.420 21:44:47 -- common/autotest_common.sh@10 -- # set +x 00:04:42.420 21:44:47 -- spdk/autotest.sh@91 -- # rm -f 00:04:42.420 21:44:47 -- spdk/autotest.sh@94 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.027 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:43.027 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:43.027 21:44:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:43.027 21:44:48 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:43.027 21:44:48 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:43.027 21:44:48 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:43.027 21:44:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:43.027 21:44:48 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:43.027 21:44:48 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:43.027 21:44:48 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.027 21:44:48 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:43.027 21:44:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:43.027 21:44:48 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:43.027 21:44:48 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:43.027 21:44:48 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:43.027 21:44:48 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:43.027 21:44:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:43.027 21:44:48 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:43.027 21:44:48 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:43.027 21:44:48 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:43.027 21:44:48 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:43.027 21:44:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:43.027 21:44:48 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:43.027 21:44:48 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:43.027 21:44:48 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:43.028 21:44:48 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:43.028 21:44:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:43.028 21:44:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.028 21:44:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:43.028 21:44:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:43.028 21:44:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:43.028 21:44:48 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:43.028 No valid GPT data, bailing 00:04:43.028 21:44:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.320 21:44:48 -- scripts/common.sh@391 -- # pt= 00:04:43.320 21:44:48 -- scripts/common.sh@392 -- # return 1 00:04:43.320 21:44:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:43.320 1+0 records in 00:04:43.320 1+0 records out 00:04:43.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519461 s, 202 MB/s 00:04:43.320 21:44:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.320 21:44:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:43.320 21:44:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:43.320 21:44:48 
-- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:43.320 21:44:48 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:43.320 No valid GPT data, bailing 00:04:43.320 21:44:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:43.320 21:44:48 -- scripts/common.sh@391 -- # pt= 00:04:43.320 21:44:48 -- scripts/common.sh@392 -- # return 1 00:04:43.320 21:44:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:43.320 1+0 records in 00:04:43.320 1+0 records out 00:04:43.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423183 s, 248 MB/s 00:04:43.320 21:44:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.320 21:44:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:43.320 21:44:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:43.320 21:44:48 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:43.320 21:44:48 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:43.320 No valid GPT data, bailing 00:04:43.320 21:44:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:43.320 21:44:48 -- scripts/common.sh@391 -- # pt= 00:04:43.320 21:44:48 -- scripts/common.sh@392 -- # return 1 00:04:43.320 21:44:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:43.320 1+0 records in 00:04:43.320 1+0 records out 00:04:43.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459207 s, 228 MB/s 00:04:43.320 21:44:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.320 21:44:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:43.320 21:44:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:43.320 21:44:48 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:43.320 21:44:48 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:43.320 No valid GPT data, bailing 00:04:43.320 21:44:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:43.320 21:44:48 -- scripts/common.sh@391 -- # pt= 00:04:43.320 21:44:48 -- scripts/common.sh@392 -- # return 1 00:04:43.320 21:44:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:43.320 1+0 records in 00:04:43.320 1+0 records out 00:04:43.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435142 s, 241 MB/s 00:04:43.320 21:44:48 -- spdk/autotest.sh@118 -- # sync 00:04:43.320 21:44:49 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:43.320 21:44:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:43.320 21:44:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:45.220 21:44:50 -- spdk/autotest.sh@124 -- # uname -s 00:04:45.220 21:44:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:45.220 21:44:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:45.220 21:44:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.220 21:44:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.220 21:44:50 -- common/autotest_common.sh@10 -- # set +x 00:04:45.220 ************************************ 00:04:45.220 START TEST setup.sh 00:04:45.220 ************************************ 00:04:45.220 21:44:50 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:45.220 * Looking for test storage... 
00:04:45.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:45.220 21:44:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:45.220 21:44:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:45.220 21:44:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:45.220 21:44:50 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.220 21:44:50 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.220 21:44:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.220 ************************************ 00:04:45.220 START TEST acl 00:04:45.220 ************************************ 00:04:45.220 21:44:50 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:45.478 * Looking for test storage... 00:04:45.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:45.478 21:44:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:45.478 21:44:50 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:45.478 21:44:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:45.478 21:44:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:45.478 21:44:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:45.478 
21:44:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:45.478 21:44:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:45.478 21:44:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.478 21:44:50 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.044 21:44:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:46.044 21:44:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:46.044 21:44:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.044 21:44:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:46.044 21:44:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.044 21:44:51 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.999 Hugepages 00:04:46.999 node hugesize free / total 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.999 00:04:46.999 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:46.999 21:44:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:47.000 21:44:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:47.000 21:44:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:47.000 21:44:52 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:47.000 21:44:52 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:47.000 21:44:52 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.000 21:44:52 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.000 21:44:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:47.000 ************************************ 00:04:47.000 START TEST denied 
00:04:47.000 ************************************ 00:04:47.000 21:44:52 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:47.000 21:44:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:47.000 21:44:52 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:47.000 21:44:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:47.000 21:44:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.000 21:44:52 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:47.933 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.933 21:44:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.498 00:04:48.498 real 0m1.511s 00:04:48.498 user 0m0.612s 00:04:48.498 sys 0m0.834s 00:04:48.498 21:44:54 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.498 21:44:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:48.498 ************************************ 00:04:48.498 END TEST denied 00:04:48.498 ************************************ 00:04:48.498 21:44:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:48.498 21:44:54 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.498 21:44:54 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.498 21:44:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:48.498 ************************************ 00:04:48.498 START TEST allowed 00:04:48.498 ************************************ 00:04:48.498 21:44:54 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:48.498 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:48.498 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:48.498 21:44:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.755 21:44:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.755 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:49.320 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:11.0 ]] 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.320 21:44:54 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.253 00:04:50.253 real 0m1.507s 00:04:50.253 user 0m0.674s 00:04:50.253 sys 0m0.824s 00:04:50.253 21:44:55 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.253 ************************************ 00:04:50.253 END TEST allowed 00:04:50.253 ************************************ 00:04:50.253 21:44:55 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:50.253 ************************************ 00:04:50.253 END TEST acl 00:04:50.253 ************************************ 00:04:50.253 00:04:50.253 real 0m4.861s 00:04:50.253 user 0m2.130s 00:04:50.253 sys 0m2.651s 00:04:50.253 21:44:55 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.253 21:44:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:50.253 21:44:55 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:50.253 21:44:55 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.253 21:44:55 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.253 21:44:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.253 ************************************ 00:04:50.253 START TEST hugepages 00:04:50.253 ************************************ 00:04:50.253 21:44:55 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:50.253 * Looking for test storage... 
00:04:50.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.253 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4882216 kB' 'MemAvailable: 7398092 kB' 'Buffers: 2436 kB' 'Cached: 2720864 kB' 'SwapCached: 0 kB' 'Active: 434888 kB' 'Inactive: 2391908 kB' 'Active(anon): 113988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 105364 kB' 'Mapped: 48920 kB' 'Shmem: 10492 kB' 'KReclaimable: 80016 kB' 'Slab: 157616 kB' 'SReclaimable: 80016 kB' 'SUnreclaim: 77600 kB' 'KernelStack: 6364 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.254 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
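Note: the scan above and in the lines that follow is setup/common.sh's get_meminfo helper walking /proc/meminfo one entry at a time: IFS is set to ': ', each line is split with read -r var val _, every key that is not the requested one (here Hugepagesize) hits continue, and the matching entry finally echoes its value, 2048, which setup/hugepages.sh stores as default_hugepages. A minimal sketch of that parsing pattern follows; the function name meminfo_value and its exact structure are illustrative, not the actual setup/common.sh source.

    # Illustrative sketch only -- not the setup/common.sh code path being traced here.
    # Split each /proc/meminfo line on ': ', skip non-matching keys, print the value.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the skip-on-mismatch step repeated throughout this trace
            echo "$val"                        # the "kB" unit falls into the discarded third field
            return 0
        done < /proc/meminfo
        return 1
    }

    meminfo_value Hugepagesize   # prints 2048 on this test VM, matching the echo in the trace below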
00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.255 21:44:55 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:50.255 21:44:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.255 21:44:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.255 21:44:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.255 ************************************ 00:04:50.255 START TEST default_setup 00:04:50.255 ************************************ 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.255 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:50.256 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.256 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:50.256 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:50.256 21:44:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:50.256 21:44:55 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.256 21:44:55 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.212 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.212 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.212 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6983036 kB' 'MemAvailable: 9498796 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451804 kB' 'Inactive: 2391912 kB' 'Active(anon): 130904 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121856 kB' 'Mapped: 48872 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157412 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77636 kB' 'KernelStack: 6464 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.212 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
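Note: the get_meminfo calls in this part of the run (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down) also show how the helper chooses its data source: node= is empty, the /sys/devices/system/node/node$node/meminfo existence test fails, and the scan falls back to /proc/meminfo; the mapfile step then strips the "Node N " prefix that per-node meminfo files would carry. A rough sketch of that selection step, with the name pick_meminfo_source and the exact conditional chosen for illustration rather than copied from setup/common.sh:

    # Rough sketch of the source selection visible in this trace (illustrative name and structure).
    # Per-node files print lines like "Node 0 AnonHugePages: 0 kB", hence the prefix strip.
    shopt -s extglob                           # needed for the +([0-9]) pattern below
    pick_meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # no-op for /proc/meminfo, trims per-node output
        printf '%s\n' "${mem[@]}"
    }

On this run the node argument is empty, which is why every one of these verification scans reads the global /proc/meminfo.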
00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.213 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
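Note: for context, the hugepage count these verification scans are checking against comes from the default_setup sizing recorded a little earlier in the trace: get_test_nr_hugepages was invoked with 2097152 (kB) for node 0, default_hugepages is the 2048 kB Hugepagesize read at the top of this block, and the script then sets nr_hugepages=1024 and nodes_test[_no_nodes]=1024, consistent with dividing the requested size by the hugepage size. A worked restatement of that arithmetic (size_kb is an illustrative name for the traced size=2097152; the explicit division is a recap, not traced output):

    # Values taken from this run's trace; the division is a restatement for clarity.
    size_kb=2097152            # argument passed to get_test_nr_hugepages for node 0
    default_hugepages=2048     # Hugepagesize reported by /proc/meminfo, in kB
    echo $(( size_kb / default_hugepages ))   # 1024, matching nr_hugepages=1024 above

The meminfo dumps in this section already report HugePages_Total: 1024 and HugePages_Free: 1024, which is the state verify_nr_hugepages is confirming.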
00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6982788 kB' 'MemAvailable: 9498552 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451120 kB' 'Inactive: 2391916 kB' 'Active(anon): 130220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121460 kB' 'Mapped: 48884 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157292 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77516 kB' 'KernelStack: 6416 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 
'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.214 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.215 
21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.215 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.216 
21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6982788 kB' 'MemAvailable: 9498552 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451116 kB' 'Inactive: 2391916 kB' 'Active(anon): 130216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121452 kB' 'Mapped: 48884 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157292 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77516 kB' 'KernelStack: 6416 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 
21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.216 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.217 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:51.218 nr_hugepages=1024 00:04:51.218 resv_hugepages=0 00:04:51.218 surplus_hugepages=0 00:04:51.218 anon_hugepages=0 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.218 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6982788 kB' 'MemAvailable: 9498552 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451424 kB' 'Inactive: 2391916 kB' 'Active(anon): 130524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121532 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157292 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77516 kB' 'KernelStack: 6400 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.218 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.219 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.220 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.479 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6982788 kB' 'MemUsed: 5259184 kB' 'SwapCached: 0 kB' 'Active: 451344 kB' 'Inactive: 2391916 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 2723296 kB' 'Mapped: 48704 kB' 'AnonPages: 121648 kB' 'Shmem: 10468 kB' 'KernelStack: 6400 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157296 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.479 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:51.480 node0=1024 expecting 1024 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.480 00:04:51.480 real 0m1.008s 00:04:51.480 user 0m0.460s 00:04:51.480 sys 0m0.471s 00:04:51.480 ************************************ 00:04:51.480 END TEST default_setup 00:04:51.480 ************************************ 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.480 21:44:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:51.480 21:44:56 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:51.480 21:44:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.480 21:44:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
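The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries above come from the get_meminfo helper in setup/common.sh, which walks /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node/) one line at a time until it reaches the requested key and echoes its value. A minimal sketch of that pattern, reconstructed from the xtrace output; the real helper loads the file into an array with mapfile before looping, so its exact shape may differ:

# get_meminfo <key> [node] -- reconstructed sketch, not the verbatim helper
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node argument the per-node file would be read instead
    # (the trace shows the path /sys/devices/system/node/node$node/meminfo).
    local var val
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # every other key is skipped -> the wall of "continue" lines
        echo "$val"                       # numeric field only; the trailing "kB" lands in the discarded field
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp   # -> 0 in the run above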
00:04:51.480 21:44:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.480 ************************************ 00:04:51.480 START TEST per_node_1G_alloc 00:04:51.480 ************************************ 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:51.480 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:51.481 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.481 21:44:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.742 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.742 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8027184 kB' 'MemAvailable: 10542960 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451784 kB' 'Inactive: 2391928 kB' 'Active(anon): 130884 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121768 kB' 'Mapped: 49024 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157312 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6352 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.742 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.743 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8026932 kB' 'MemAvailable: 10542708 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451360 kB' 'Inactive: 2391928 kB' 'Active(anon): 130460 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121844 kB' 'Mapped: 48840 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157336 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77560 kB' 'KernelStack: 6356 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 
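The /proc/meminfo snapshot just printed matches the allocation this test asked for: get_test_nr_hugepages converted the requested 1048576 kB (1 GiB) on node 0 into 512 default-size pages, and the dump reports HugePages_Total: 512 with Hugepagesize: 2048 kB and Hugetlb: 1048576 kB. A quick arithmetic check of that conversion, using only values taken from the trace:

# 1 GiB expressed in kB, divided by the default 2 MiB hugepage size in kB,
# gives the 512 pages requested via NRHUGE=512 HUGENODE=0 earlier in this test.
echo $(( 1048576 / 2048 ))   # 512 pages
# Reverse check against the Hugetlb line in the dump above:
echo $(( 512 * 2048 ))       # 1048576 kB = 1 GiB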
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.744 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.745 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.746 21:44:57 
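At this point verify_nr_hugepages has recorded anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is starting a third get_meminfo pass for HugePages_Rsvd. The check it is building toward is the same one that closed the previous test ("node0=1024 expecting 1024"): compare the per-node free-page count against the number requested. A compressed sketch of that bookkeeping, using awk in place of the script's own read loop, since only fragments of setup/hugepages.sh are visible in the trace and the real arithmetic may differ:

# Gather the same counters the trace queries, then print the comparison line.
anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)  # 0 in this run
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)  # being read above
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)  # 512 in the dumps
# Single-node VM, so the global count stands in for node 0's count here.
echo "node0=$free expecting 512"   # 512 pages were requested for node 0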
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8026684 kB' 'MemAvailable: 10542460 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451280 kB' 'Inactive: 2391928 kB' 'Active(anon): 130380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121768 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157332 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77556 kB' 'KernelStack: 6384 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 
21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.746 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 
21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.747 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 
21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.008 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:52.009 nr_hugepages=512 00:04:52.009 resv_hugepages=0 00:04:52.009 surplus_hugepages=0 00:04:52.009 anon_hugepages=0 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8026684 kB' 'MemAvailable: 10542460 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451468 kB' 'Inactive: 2391928 kB' 'Active(anon): 130568 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121692 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157332 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77556 kB' 'KernelStack: 6384 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
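The scan traced above is the generic get_meminfo pattern from setup/common.sh: the meminfo contents are snapshotted, then read entry by entry with IFS=': ' until the requested key (here HugePages_Total) is reached, every other key falling through to continue. A minimal standalone sketch of the same idea follows; the function name read_meminfo_field and its arguments are illustrative, not the test's own helper.

    #!/usr/bin/env bash
    # Sketch: pull one field (e.g. HugePages_Total) out of a meminfo-style file
    # by scanning it line by line, as the trace above does. Illustrative only.
    read_meminfo_field() {
        local key=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # skip every non-matching key
            echo "$val"                        # print the value and stop
            return 0
        done <"$file"
        return 1
    }

    read_meminfo_field HugePages_Total   # system-wide hugepage count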
00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.009 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:52.010 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8026432 kB' 'MemUsed: 4215540 kB' 'SwapCached: 0 kB' 'Active: 451152 kB' 'Inactive: 2391928 kB' 'Active(anon): 130252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 2723296 kB' 'Mapped: 48704 kB' 'AnonPages: 121632 kB' 'Shmem: 10468 kB' 'KernelStack: 6384 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157332 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
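At this point the same scan runs against node 0 only: because a node was passed in, the helper switches its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the leading "Node 0 " prefix those lines carry, and looks up HugePages_Surp. A small sketch of that per-node variant, with illustrative names and no claim to match the real helper's internals:

    # Sketch: read one field from a NUMA node's own meminfo, whose lines are
    # prefixed with "Node <n> ". Illustrative only.
    node_meminfo_field() {
        local key=$1 node=$2
        local file=/sys/devices/system/node/node${node}/meminfo
        local line var val _
        while IFS= read -r line; do
            line=${line#"Node ${node} "}       # drop the per-node prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done <"$file"
        return 1
    }

    node_meminfo_field HugePages_Surp 0   # surplus hugepages on node 0, as checked here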
00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.011 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.012 21:44:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.012 node0=512 expecting 512 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:52.012 00:04:52.012 real 0m0.541s 00:04:52.012 user 0m0.271s 00:04:52.012 sys 0m0.276s 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.012 21:44:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 ************************************ 00:04:52.012 END TEST per_node_1G_alloc 00:04:52.012 ************************************ 00:04:52.012 21:44:57 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:52.012 21:44:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.012 21:44:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.012 21:44:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.012 ************************************ 00:04:52.012 START TEST even_2G_alloc 00:04:52.012 ************************************ 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.012 21:44:57 
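The sizing step traced above, get_test_nr_hugepages 2097152, presumably reduces to dividing the requested size by the default hugepage size; the short bash sketch below only restates that arithmetic with values visible in this log (2097152 as the argument, 'Hugepagesize: 2048 kB' in the meminfo dumps) and is not the setup/hugepages.sh implementation. It assumes the argument is in kB, which is consistent with the 2 GiB the test name implies.
# Illustrative only: the hugepage count even_2G_alloc arrives at.
size_kb=2097152            # argument passed to get_test_nr_hugepages (assumed kB, i.e. 2 GiB)
default_hugepage_kb=2048   # 'Hugepagesize: 2048 kB' from the meminfo dumps below
if (( size_kb >= default_hugepage_kb )); then
  nr_hugepages=$(( size_kb / default_hugepage_kb ))
fi
echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1024, matching the trace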
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:52.012 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:52.013 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:52.013 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.013 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:52.013 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:52.013 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:52.013 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.013 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.272 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.272 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
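The long xtrace runs that follow all come from the same meminfo lookup visible in the commands above: mem_f=/proc/meminfo (or the per-node meminfo file when a node is given), mapfile into an array, then an IFS=': ' read -r var val _ loop that compares each key to the requested field and echoes its value. A minimal stand-alone sketch of that lookup, written here for illustration and not taken from setup/common.sh, looks like this:
#!/usr/bin/env bash
# Illustrative sketch of the lookup the xtrace below performs; the real
# implementation lives in setup/common.sh and differs in detail.
get_meminfo_sketch() {    # usage: get_meminfo_sketch <field> [node]
  local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
  # Per-node lookups read the node's own meminfo file instead of /proc/meminfo.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  while IFS= read -r line; do
    line=${line#Node [0-9]* }             # drop the "Node N " prefix, if any
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "$val"                         # numeric value, without the "kB" unit
      return 0
    fi
  done < "$mem_f"
  return 1
}
get_meminfo_sketch HugePages_Total        # would print 1024 on this runner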
00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6979552 kB' 'MemAvailable: 9495328 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451360 kB' 'Inactive: 2391928 kB' 'Active(anon): 130460 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121788 kB' 'Mapped: 48900 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157372 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77596 kB' 'KernelStack: 6360 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.272 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.272 21:44:57 
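As a quick sanity check, the hugepage totals in the snapshot just printed are self-consistent; the lines below only restate numbers taken from that dump.
# Numbers taken from the meminfo dump above; illustration only.
hugepages_total=1024      # HugePages_Total
hugepagesize_kb=2048      # Hugepagesize: 2048 kB
echo $(( hugepages_total * hugepagesize_kb ))   # 2097152, i.e. 'Hugetlb: 2097152 kB'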
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... xtrace omitted: the IFS=': '/read/continue loop skips every remaining /proc/meminfo field from Active through HardwareCorrupted, in the order of the dump above, before reaching AnonHugePages ...] 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.273 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6979656 kB' 'MemAvailable: 9495432 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451244 kB' 'Inactive: 2391928 kB' 'Active(anon): 130344 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121724 kB' 'Mapped: 48840 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157376 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77600 kB' 'KernelStack: 6412 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.536 21:44:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... xtrace omitted: the IFS=': '/read/continue loop skips every remaining /proc/meminfo field from MemAvailable through FileHugePages, in the order of the dump above, while scanning for HugePages_Surp ...] 00:04:52.537 21:44:58
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.537 21:44:58 
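At this point the verification pass has read AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is about to read HugePages_Rsvd. Using the hypothetical get_meminfo_sketch helper sketched earlier, the same bookkeeping can be written compactly; this is an illustration, not the hugepages.sh code:
# Illustration only; reuses the hypothetical get_meminfo_sketch helper above.
anon=$(get_meminfo_sketch AnonHugePages)    # 0 in the trace above
surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the trace above
resv=$(get_meminfo_sketch HugePages_Rsvd)   # the lookup the trace performs next
echo "anon=$anon surp=$surp resv=$resv"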
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.537 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6979656 kB' 'MemAvailable: 9495432 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451248 kB' 'Inactive: 2391928 kB' 'Active(anon): 130348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121712 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157368 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77592 kB' 'KernelStack: 6384 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.538 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.538 21:44:58 [... xtrace omitted: the IFS=': '/read/continue loop skips every remaining /proc/meminfo field from Buffers through VmallocUsed, in the order of the dump above, while scanning for HugePages_Rsvd; the excerpt ends during the VmallocChunk comparison ...] 00:04:52.539 21:44:58
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.539 nr_hugepages=1024 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.539 resv_hugepages=0 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.539 surplus_hugepages=0 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.539 anon_hugepages=0 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.539 21:44:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.539 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6979656 kB' 'MemAvailable: 9495432 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451264 kB' 'Inactive: 2391928 kB' 'Active(anon): 130364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121740 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157368 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77592 kB' 'KernelStack: 6400 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 
21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.540 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
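The long per-key scan traced above is setup/common.sh's get_meminfo walking every /proc/meminfo field with IFS=': ' and read -r var val _ until it reaches the requested key (HugePages_Total at this point), echoing its value and returning. A minimal standalone sketch of that parsing pattern, under my own helper name and with the "Node N " prefix stripping simplified to single-digit node IDs (the real script uses an extglob pattern, as the trace shows):

    # Simplified stand-in for the get_meminfo loop traced above; not the real
    # setup/common.sh helper, just the same parsing idea in isolation.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node queries read that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        for line in "${mem[@]}"; do
            line=${line#Node [0-9] }              # node files prefix each row with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                       # e.g. 1024 for HugePages_Total here
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Total or get_meminfo_sketch HugePages_Surp 0, it would print the same 1024 and 0 that the trace echoes before each "return 0".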
00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 
21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.541 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6979656 kB' 'MemUsed: 5262316 kB' 'SwapCached: 0 kB' 'Active: 451200 kB' 'Inactive: 2391928 kB' 'Active(anon): 130300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 
0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2723296 kB' 'Mapped: 48712 kB' 'AnonPages: 121456 kB' 'Shmem: 10468 kB' 'KernelStack: 6384 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157360 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
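Once HugePages_Total matches the requested 1024, get_nodes enumerates /sys/devices/system/node/node+([0-9]) (a single node on this VM) and the surplus is then read from node0's own meminfo, as the dump above shows. The same per-node accounting is also exposed directly under the hugepages sysfs tree; a hedged sketch using those standard paths and the 2048 kB page size reported in the dumps, not the script's actual code:

    # Report how many 2048 kB hugepages each NUMA node holds; equivalent
    # information to the node0/meminfo scan in the trace, taken from the
    # per-node hugepages directories instead.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        free=$(cat "$node_dir/hugepages/hugepages-2048kB/free_hugepages")
        surp=$(cat "$node_dir/hugepages/hugepages-2048kB/surplus_hugepages")
        echo "node$node: nr=$nr free=$free surplus=$surp"
    done

On this single-node VM it would print node0: nr=1024 free=1024 surplus=0, consistent with the "node0=1024 expecting 1024" result echoed further down.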
00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.542 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.543 node0=1024 expecting 1024 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.543 00:04:52.543 real 0m0.511s 00:04:52.543 user 0m0.258s 00:04:52.543 sys 0m0.284s 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.543 21:44:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.543 ************************************ 00:04:52.543 END TEST even_2G_alloc 00:04:52.543 ************************************ 00:04:52.543 21:44:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:52.543 21:44:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.543 21:44:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.543 21:44:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.543 ************************************ 00:04:52.543 START TEST odd_alloc 00:04:52.543 ************************************ 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:52.543 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.543 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.801 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:52.801 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
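Here odd_alloc asks get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049 MiB), arrives at nr_hugepages=1025, pins the whole count on the single node, and re-runs scripts/setup.sh with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes. A hedged sketch of that size-to-pages conversion, assuming ceiling division on the 2048 kB Hugepagesize shown in the dumps; the real helper in setup/hugepages.sh may round differently:

    # Convert a HUGEMEM request in MiB into a hugepage count.
    HUGEMEM=${HUGEMEM:-2049}                                        # MiB; 2049 gives the odd count
    hp_size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    size_kb=$((HUGEMEM * 1024))                                     # 2098176 kB, as in the trace
    nr_hugepages=$(( (size_kb + hp_size_kb - 1) / hp_size_kb ))     # ceiling division -> 1025
    echo "nr_hugepages=$nr_hugepages"
    # The test then hands the request to the setup script, as the trace shows:
    #   HUGEMEM=2049 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh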
00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.064 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973440 kB' 'MemAvailable: 9489216 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451640 kB' 'Inactive: 2391928 kB' 'Active(anon): 130740 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121584 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157368 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77592 kB' 'KernelStack: 6400 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 
21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.065 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 
21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973440 kB' 'MemAvailable: 9489216 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451592 kB' 'Inactive: 2391928 kB' 'Active(anon): 130692 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121796 kB' 'Mapped: 48720 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157364 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77588 kB' 'KernelStack: 6368 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.066 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 
21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.067 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
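Every block of [[ ... == \H\u\g\e\P\a\g\e\s\_... ]] / continue lines in this trace is the same lookup: split a meminfo line on ': ', skip it unless the key matches the requested field, then echo the value. A simplified, self-contained sketch of that pattern (get_meminfo_value is a hypothetical stand-in for the script's get_meminfo; per-node meminfo files and their "Node N " prefix handling are omitted here):

get_meminfo_value() {
  local get=$1 var val _
  # same field splitting the trace shows: IFS=': ' plus read -r var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "${val:-0}"; return; }
  done < /proc/meminfo
  echo 0   # assumption for this sketch: a missing key counts as zero
}

get_meminfo_value HugePages_Rsvd    # prints 0 on this runner, as in the dump above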
00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973440 kB' 'MemAvailable: 9489216 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451512 kB' 'Inactive: 2391928 kB' 'Active(anon): 130612 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121760 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157368 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77592 kB' 'KernelStack: 6400 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.068 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
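These per-field lookups build up to the check logged at the end of this pass: anon=0 and surp=0 have already been recorded, this loop collects HugePages_Rsvd, and hugepages.sh then asserts (( 1025 == nr_hugepages + surp + resv )). A hypothetical stand-alone equivalent of that verification, reading the same fields with awk instead of the script's own helpers (expected values from the meminfo dumps above shown in comments):

nr_hugepages=1025
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025
(( total == nr_hugepages + surp + resv )) && echo 'odd_alloc: hugepage count verified'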
00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.069 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 
21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:53.070 nr_hugepages=1025 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:53.070 resv_hugepages=0 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.070 surplus_hugepages=0 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.070 anon_hugepages=0 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973440 kB' 'MemAvailable: 9489216 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451520 kB' 'Inactive: 2391928 kB' 'Active(anon): 130620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121760 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157368 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77592 kB' 'KernelStack: 6400 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 
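
The /proc/meminfo snapshot printed just above is internally consistent on the hugepage side: HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0, Hugepagesize: 2048 kB, and 1025 pages x 2048 kB/page = 2,099,200 kB, which matches the reported Hugetlb: 2099200 kB (only the default 2 MiB size is in use on this VM, so nothing else contributes to Hugetlb). A quick cross-check one can run by hand, as a sketch:

    # Sketch: recompute the Hugetlb figure from the page count and page size.
    pages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    pagesz_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo "expected Hugetlb: $(( pages * pagesz_kb )) kB"
    grep '^Hugetlb:' /proc/meminfo
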
21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.070 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.071 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc 
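
With the system-wide count confirmed (the (( 1025 == nr_hugepages + surp + resv )) check above passes), get_nodes enumerates /sys/devices/system/node/node* (a single node on this VM) and get_meminfo is called again with node=0, which merely switches the input file to the per-node meminfo and strips the "Node 0 " prefix from each line before parsing (the mapfile / "${mem[@]#Node +([0-9]) }" steps in the trace). A sketch of that per-node lookup, assuming the usual sysfs layout:

    # Sketch: per-node variant of the same lookup, here for HugePages_Surp on node 0.
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    sed 's/^Node [0-9]* *//' "$mem_f" | awk -F': *' '$1 == "HugePages_Surp" {print $2}'
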
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6973440 kB' 'MemUsed: 5268532 kB' 'SwapCached: 0 kB' 'Active: 451464 kB' 'Inactive: 2391928 kB' 'Active(anon): 130564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 2723296 kB' 'Mapped: 48712 kB' 'AnonPages: 121724 kB' 'Shmem: 10468 kB' 'KernelStack: 6384 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157368 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.072 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.073 node0=1025 expecting 1025 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:53.073 00:04:53.073 real 0m0.568s 00:04:53.073 user 0m0.262s 00:04:53.073 sys 0m0.315s 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.073 21:44:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.073 ************************************ 00:04:53.073 END TEST odd_alloc 00:04:53.073 ************************************ 00:04:53.073 21:44:58 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:53.073 21:44:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.073 21:44:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.073 21:44:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.073 ************************************ 00:04:53.073 START TEST custom_alloc 00:04:53.073 ************************************ 00:04:53.073 21:44:58 
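
That closes the odd_alloc case: node0 reports 1025 huge pages against an expected 1025, the [[ 1025 == \1\0\2\5 ]] comparison passes, and the sub-test finishes in about 0.57 s of wall time before run_test moves on to custom_alloc. The START/END banners and the real/user/sys lines come from the run_test wrapper in autotest_common.sh; a rough sketch of the pattern visible here (the real wrapper also handles the xtrace toggling and exit-code bookkeeping seen in the trace):

    # Sketch of the banner/timing pattern seen in the log, not the actual run_test.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                                   # prints the real/user/sys lines
        echo "************  END TEST $name  ************"
    }
    # e.g. run_test_sketch custom_alloc custom_alloc   # test name, then the function to run
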
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:53.073 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:53.074 
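
The custom_alloc case starts by converting a requested pool size into a page count: get_test_nr_hugepages is asked for 1048576 kB (1 GiB), the default hugepage size on this VM is 2048 kB, so nr_hugepages becomes 512, and with a single NUMA node the whole 512 lands in nodes_test[0] / nodes_hp[0]. The arithmetic, as a sketch:

    # Sketch: the page-count arithmetic behind nr_hugepages=512.
    size_kb=1048576                                                      # requested pool, 1 GiB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                 # -> 512
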
21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.074 21:44:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.645 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.645 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.645 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
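
The request is then handed to the environment as HUGENODE='nodes_hp[0]=512' and scripts/setup.sh is re-run to apply it; the PCI lines above show setup.sh skipping the in-use virtio disk (mount@vda, "so not binding PCI dev") and leaving the two test controllers on the uio_pci_generic driver they already use. The invocation pattern, as it appears in this log (a sketch of this log's usage, not a general setup.sh reference):

    # Sketch: re-run the SPDK setup script with the per-node hugepage request
    # derived above, then confirm node 0 picked it up.
    HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    grep HugePages_Total /sys/devices/system/node/node0/meminfo
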
-- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8030196 kB' 'MemAvailable: 10545972 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451868 kB' 'Inactive: 2391928 kB' 'Active(anon): 130968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122116 kB' 'Mapped: 48808 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157344 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77568 kB' 'KernelStack: 6372 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.646 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8030196 kB' 'MemAvailable: 10545972 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451368 kB' 'Inactive: 2391928 kB' 'Active(anon): 130468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121832 kB' 'Mapped: 48808 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157344 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77568 kB' 'KernelStack: 6392 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.647 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.647 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.648 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8030196 kB' 'MemAvailable: 10545972 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451392 kB' 'Inactive: 2391928 kB' 'Active(anon): 130492 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121872 kB' 'Mapped: 48808 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157344 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77568 kB' 'KernelStack: 6392 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.649 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.649 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.650 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:53.651 nr_hugepages=512 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:53.651 resv_hugepages=0 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.651 surplus_hugepages=0 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.651 anon_hugepages=0 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8030196 kB' 'MemAvailable: 10545972 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451228 kB' 'Inactive: 2391928 kB' 'Active(anon): 130328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121692 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157344 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77568 kB' 'KernelStack: 6384 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.651 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
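The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time (IFS=': '; read -r var val _) and skipping every line until it reaches the requested HugePages_* counter; the backslash-escaped pattern in each [[ ... ]] test simply forces a literal, non-glob comparison. A minimal stand-alone sketch of that lookup pattern, assuming plain bash and using an illustrative name (get_meminfo_sketch) rather than the repository's actual helper:

# Sketch only: get_meminfo_sketch is an illustrative reimplementation of the
# lookup pattern visible in the trace, not the code from setup/common.sh.
get_meminfo_sketch() {
    local get=$1 node=${2:-}               # meminfo key, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node meminfo prefixes each line with "Node <n> "; strip it so the
        # same comparison works for both file layouts.
        [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        # Quoting $get gives the literal match that the escaped pattern encodes.
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

Against the snapshot printed in this log, get_meminfo_sketch HugePages_Total yields 512 and get_meminfo_sketch HugePages_Rsvd yields 0, which is what the nr_hugepages=512 and resv=0 lines above reflect.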
00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.652 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8030196 kB' 'MemUsed: 4211776 kB' 'SwapCached: 0 kB' 'Active: 451416 kB' 'Inactive: 2391928 kB' 'Active(anon): 130516 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2723296 kB' 'Mapped: 48704 kB' 'AnonPages: 121992 kB' 'Shmem: 10468 kB' 'KernelStack: 6432 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157344 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.653 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.654 node0=512 expecting 512 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:53.654 00:04:53.654 real 0m0.526s 00:04:53.654 user 0m0.246s 00:04:53.654 sys 0m0.313s 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.654 21:44:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.654 ************************************ 00:04:53.654 END TEST custom_alloc 00:04:53.654 ************************************ 00:04:53.654 21:44:59 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:53.654 21:44:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.654 21:44:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.654 21:44:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.654 ************************************ 00:04:53.654 START TEST no_shrink_alloc 00:04:53.654 ************************************ 00:04:53.654 21:44:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:53.654 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:53.654 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.654 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.655 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.231 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.231 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.231 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975268 kB' 'MemAvailable: 9491044 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451692 kB' 'Inactive: 2391928 kB' 'Active(anon): 130792 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122124 kB' 'Mapped: 48776 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157304 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77528 kB' 'KernelStack: 6340 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.231 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
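At this point no_shrink_alloc has requested a 1024-page pool (nr_hugepages=1024 above) and verify_nr_hugepages is walking /proc/meminfo again, here for AnonHugePages, with the HugePages_* counters checked the same way right after. The snapshot it printed is internally consistent: Hugetlb (2097152 kB) divided by Hugepagesize (2048 kB) gives the 1024 pages reported as HugePages_Total, and the (( total == nr_hugepages + surp + resv )) style check used by hugepages.sh passes because both the surplus and reserved counts are 0. A small sketch of that accounting, assuming awk is available and using illustrative names (meminfo_val, check_hugepage_pool) that are not helpers from the repository:

# Sketch only: meminfo_val/check_hugepage_pool are illustrative, not the
# functions from setup/common.sh or setup/hugepages.sh.
meminfo_val() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

check_hugepage_pool() {
    local want_pages=$1                        # 1024 in this run
    local total surp resv
    total=$(meminfo_val HugePages_Total)       # 1024 in the snapshot above
    surp=$(meminfo_val HugePages_Surp)         # 0
    resv=$(meminfo_val HugePages_Rsvd)         # 0
    # Same shape as the check traced above: the pool must cover the request
    # exactly, with no surplus or reserved pages making up the difference.
    if (( total == want_pages + surp + resv )); then
        echo "node0=${total} expecting ${want_pages}"   # mirrors the log's echo format
        return 0
    fi
    return 1
}

check_hugepage_pool 1024

Run against the snapshot above, check_hugepage_pool 1024 prints node0=1024 expecting 1024 and returns 0, since 1024 == 1024 + 0 + 0.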
00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.232 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975268 kB' 'MemAvailable: 9491044 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451292 kB' 'Inactive: 2391928 kB' 'Active(anon): 130392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121776 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157312 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6400 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.233 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.234 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975268 kB' 'MemAvailable: 9491044 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451384 kB' 'Inactive: 2391928 kB' 'Active(anon): 130484 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121436 kB' 'Mapped: 48964 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157312 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6432 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.235 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.236 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:54.237 nr_hugepages=1024 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
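
The three lookups traced above (AnonHugePages, HugePages_Surp and HugePages_Rsvd, each ending in "echo 0" / "return 0" and yielding anon=0, surp=0 and resv=0) all follow the same pattern: read /proc/meminfo, or the per-node meminfo file when a NUMA node is given, strip the "Node N " prefix, and print the value of the first line whose key matches. The helper below is an editor's sketch of that pattern reconstructed from the trace, not the verbatim setup/common.sh code; the name get_meminfo_sketch and the trailing call are illustrative only.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup seen in the trace above (hypothetical form,
    # not the actual setup/common.sh implementation).
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}          # key to fetch, optional NUMA node number
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # On the machine traced above this prints 0, matching the surp=0 recorded in the log.
    get_meminfo_sketch HugePages_Surp
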
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.237 resv_hugepages=0 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.237 surplus_hugepages=0 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.237 anon_hugepages=0 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975268 kB' 'MemAvailable: 9491048 kB' 'Buffers: 2436 kB' 'Cached: 2720864 kB' 'SwapCached: 0 kB' 'Active: 451296 kB' 'Inactive: 2391932 kB' 'Active(anon): 130396 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 121568 kB' 'Mapped: 48904 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157276 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77500 kB' 'KernelStack: 6384 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.237 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 
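The checks at hugepages.sh@107 and @109 above are plain arithmetic: with nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0, the test (( 1024 == nr_hugepages + surp + resv )) reduces to 1024 == 1024 + 0 + 0 and passes, so the suite goes on to re-read HugePages_Total from meminfo for confirmation. A stand-alone check along the same lines could look like the sketch below; it is illustrative only, not the repo's setup/hugepages.sh, and uses the standard /proc/meminfo keys.

  #!/usr/bin/env bash
  # Sketch: confirm the kernel still reports the expected hugepage pool.
  expected=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  if (( total == expected && rsvd == 0 && surp == 0 )); then
      echo "hugepages intact: ${total} total, ${rsvd} reserved, ${surp} surplus"
  else
      echo "unexpected hugepage accounting: ${total}/${rsvd}/${surp}" >&2
      exit 1
  fi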
21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 
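One note on reading the trace itself: the backslash-laden patterns such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are not corruption. When the right-hand side of == inside [[ ]] is a quoted string, bash xtrace re-prints it with every character escaped so the logged form would still match literally, which is why every skipped meminfo key above is compared against that escaped form of HugePages_Total. A minimal reproduction (illustrative):

  set -x
  key=MemTotal
  [[ $key == "HugePages_Total" ]] || true   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]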
21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.238 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975268 kB' 'MemUsed: 5266704 kB' 'SwapCached: 0 kB' 'Active: 451260 kB' 'Inactive: 2391932 kB' 'Active(anon): 130360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 2723300 kB' 'Mapped: 48904 kB' 'AnonPages: 121792 kB' 'Shmem: 10468 kB' 'KernelStack: 6368 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157276 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.239 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 
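Before this second scan, get_nodes at hugepages.sh@27-@33 simply enumerated the /sys/devices/system/node/node<N> directories to build the per-node expectation (a single node here, so no_nodes=1 and nodes_sys[0]=1024). An equivalent enumeration is sketched below; it is not the repo's helper, just the same sysfs layout read directly.

  # Sketch: list NUMA node IDs from the kernel's sysfs node directories.
  shopt -s nullglob
  nodes=(/sys/devices/system/node/node[0-9]*)
  echo "found ${#nodes[@]} node(s): ${nodes[@]##*node}"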
21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.240 
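The long run of continue records above is setup/common.sh's get_meminfo doing a linear scan: with node=0 it switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the leading "Node 0" prefix from each line, splits on ': ', skips every key that is not the requested one, and finally echoes the matching value and returns (here HugePages_Surp -> 0). A compact equivalent lookup is sketched below; get_field is a hypothetical helper, not the project's implementation, and assumes the same file layout.

  # Sketch: fetch one meminfo field, optionally for a specific NUMA node.
  get_field() {
      local key=$1 node=${2-}
      local file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          # per-node lines look like: "Node 0 HugePages_Surp: 0"
          file=/sys/devices/system/node/node$node/meminfo
      fi
      awk -v k="$key" '{ for (i = 1; i <= NF; i++) if ($i == (k ":")) { print $(i+1); exit } }' "$file"
  }

  get_field HugePages_Surp 0    # prints 0 for the state captured above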
21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.240 node0=1024 expecting 1024 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.240 21:44:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.499 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.499 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.499 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.499 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.499 21:45:00 
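With the count verified the test prints 'node0=1024 expecting 1024', then re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no. Because 1024 pages are already allocated on node0, setup.sh keeps the existing pool rather than shrinking it to 512, which is the behaviour no_shrink_alloc exists to exercise; the INFO line above confirms it. The effect can be observed directly from sysfs. The sketch below is illustrative (run as root from an SPDK checkout); the sysfs path is the kernel's standard per-node hugetlb interface, not something defined by setup.sh.

  # Sketch: a smaller request with CLEAR_HUGE=no must not shrink the pool.
  node_hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  echo "before: $(cat "$node_hp") hugepages on node0"     # 1024 in the run above

  NRHUGE=512 CLEAR_HUGE=no ./scripts/setup.sh             # asks for 512, finds 1024

  echo "after:  $(cat "$node_hp") hugepages on node0"     # still 1024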
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.499 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975216 kB' 'MemAvailable: 9490996 kB' 'Buffers: 2436 kB' 'Cached: 2720864 kB' 'SwapCached: 0 kB' 'Active: 451988 kB' 'Inactive: 2391932 kB' 'Active(anon): 131088 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122472 kB' 'Mapped: 48836 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157304 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77528 kB' 'KernelStack: 6432 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.500 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.762 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.762 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.762 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.762 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.762 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.763 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975216 kB' 'MemAvailable: 9490996 kB' 'Buffers: 2436 kB' 'Cached: 2720864 kB' 'SwapCached: 0 kB' 'Active: 451472 kB' 'Inactive: 2391932 kB' 'Active(anon): 130572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121964 kB' 'Mapped: 48836 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157296 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77520 kB' 'KernelStack: 6400 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.764 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 
21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:54.765 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975216 kB' 'MemAvailable: 9490992 kB' 'Buffers: 2436 kB' 'Cached: 2720860 kB' 'SwapCached: 0 kB' 'Active: 451264 kB' 'Inactive: 2391928 kB' 'Active(anon): 130364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391928 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121588 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157292 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77516 kB' 'KernelStack: 6368 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.766 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.767 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
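The records above are the bash xtrace of the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a node's meminfo file when a NUMA node is passed), strips any "Node <N> " prefix, then walks the "Key: value" pairs with IFS=': ' until the requested key matches and echoes its value, which is how AnonHugePages and HugePages_Surp above resolved to 0. A minimal runnable sketch of that helper, reconstructed from the trace (the real setup/common.sh may differ in detail):

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern below

# Sketch of setup/common.sh's get_meminfo as reconstructed from the xtrace.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # When a NUMA node is given, read that node's meminfo file instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node <N> "; strip that.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [unit]" pairs until the requested key is found.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total    # prints 1024 on the VM traced here
get_meminfo AnonHugePages      # prints 0, the value hugepages.sh stores as anon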
00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:54.768 nr_hugepages=1024 00:04:54.768 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.768 resv_hugepages=0 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.768 surplus_hugepages=0 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.768 anon_hugepages=0 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975216 kB' 'MemAvailable: 9490996 kB' 'Buffers: 2436 kB' 'Cached: 2720864 kB' 'SwapCached: 0 kB' 'Active: 451376 kB' 'Inactive: 2391932 kB' 'Active(anon): 130476 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121632 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157292 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77516 kB' 'KernelStack: 6400 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.768 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.768 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 
21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.769 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.770 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6975216 kB' 'MemUsed: 5266756 kB' 'SwapCached: 0 kB' 'Active: 451236 kB' 'Inactive: 2391932 kB' 'Active(anon): 130336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2391932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 2723300 kB' 'Mapped: 48704 kB' 'AnonPages: 121768 kB' 'Shmem: 10468 kB' 'KernelStack: 6384 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157288 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.770 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 
21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.771 21:45:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.771 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.771 node0=1024 expecting 1024 00:04:54.772 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.772 21:45:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.772 00:04:54.772 real 0m1.058s 00:04:54.772 user 0m0.529s 00:04:54.772 sys 0m0.564s 00:04:54.772 21:45:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.772 21:45:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:54.772 ************************************ 00:04:54.772 END TEST no_shrink_alloc 00:04:54.772 ************************************ 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:54.772 21:45:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:54.772 00:04:54.772 real 0m4.640s 00:04:54.772 user 0m2.186s 00:04:54.772 sys 0m2.470s 00:04:54.772 21:45:00 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.772 21:45:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.772 ************************************ 00:04:54.772 END TEST hugepages 00:04:54.772 ************************************ 00:04:55.030 21:45:00 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.030 21:45:00 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.030 21:45:00 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.030 21:45:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:55.030 ************************************ 00:04:55.030 START TEST driver 00:04:55.030 ************************************ 00:04:55.030 21:45:00 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.030 * Looking for test storage... 
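The long run of near-identical records above is setup/common.sh's get_meminfo loop: it reads /proc/meminfo (or /sys/devices/system/node/node0/meminfo) one "key: value" pair at a time with IFS=': ', skips every key that is not the one requested, and echoes the matching value (1024 for HugePages_Total and 0 for HugePages_Surp in this run). A minimal standalone sketch of that pattern, assuming plain /proc/meminfo and a hypothetical helper name rather than the repo's own function:

# Sketch only; get_meminfo_value is a hypothetical name, not part of setup/common.sh.
get_meminfo_value() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip keys we were not asked about
        echo "$val"                         # kB figure, or a bare count for HugePages_*
        return 0
    done < /proc/meminfo
    return 1
}

# e.g. get_meminfo_value HugePages_Total  ->  1024 on the VM in this run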
00:04:55.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:55.030 21:45:00 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:55.030 21:45:00 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.030 21:45:00 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.596 21:45:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:55.596 21:45:01 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.596 21:45:01 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.596 21:45:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:55.596 ************************************ 00:04:55.596 START TEST guess_driver 00:04:55.596 ************************************ 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:55.596 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:55.596 Looking for driver=uio_pci_generic 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
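The guess_driver records above show setup/driver.sh picking a userspace PCI driver: vfio is chosen only when /sys/kernel/iommu_groups is populated (or unsafe no-IOMMU mode is enabled), otherwise the script falls back to uio_pci_generic after confirming with modprobe --show-depends that the module resolves to a real .ko on the running kernel. A condensed sketch of that decision, an approximation of the logic visible in the log rather than a copy of driver.sh:

# Approximation of the pick_driver flow above (vfio first, uio_pci_generic fallback).
pick_driver() {
    local unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
        echo vfio-pci                        # assumed spelling; this run never takes the vfio path
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

On this VM the IOMMU group glob is empty, so the run settles on uio_pci_generic and the subsequent "setup output config" pass checks that each device is actually bound to it.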
00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.596 21:45:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.162 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:56.162 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:56.162 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.420 21:45:01 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.986 00:04:56.986 real 0m1.461s 00:04:56.986 user 0m0.566s 00:04:56.986 sys 0m0.896s 00:04:56.986 21:45:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.987 ************************************ 00:04:56.987 21:45:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:56.987 END TEST guess_driver 00:04:56.987 ************************************ 00:04:56.987 00:04:56.987 real 0m2.128s 00:04:56.987 user 0m0.796s 00:04:56.987 sys 0m1.386s 00:04:56.987 21:45:02 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.987 21:45:02 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:56.987 ************************************ 00:04:56.987 END TEST driver 00:04:56.987 ************************************ 00:04:56.987 21:45:02 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:56.987 21:45:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.987 21:45:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.987 21:45:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.987 ************************************ 00:04:56.987 START TEST devices 00:04:56.987 ************************************ 00:04:56.987 21:45:02 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:57.245 * Looking for test storage... 
00:04:57.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.245 21:45:02 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:57.245 21:45:02 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:57.245 21:45:02 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.245 21:45:02 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:57.881 21:45:03 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:57.881 21:45:03 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:57.881 21:45:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:57.881 21:45:03 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:57.881 No valid GPT data, bailing 00:04:57.881 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.881 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:57.881 21:45:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:57.881 21:45:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:57.881 21:45:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:57.881 21:45:03 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:57.881 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:57.881 21:45:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:57.881 21:45:03 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:58.140 No valid GPT data, bailing 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:58.140 21:45:03 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:58.140 No valid GPT data, bailing 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:58.140 No valid GPT data, bailing 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:58.140 21:45:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:58.140 21:45:03 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:58.140 21:45:03 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:58.140 21:45:03 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.140 21:45:03 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.140 21:45:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.140 ************************************ 00:04:58.140 START TEST nvme_mount 00:04:58.140 ************************************ 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.140 21:45:03 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:59.514 Creating new GPT entries in memory. 00:04:59.514 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.514 other utilities. 00:04:59.514 21:45:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.514 21:45:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.514 21:45:04 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.514 21:45:04 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.514 21:45:04 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:00.449 Creating new GPT entries in memory. 00:05:00.449 The operation has completed successfully. 
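Everything from "Creating new GPT entries" down to "The operation has completed successfully." is setup/common.sh's partition_drive step for the nvme_mount test: it zaps any existing GPT on /dev/nvme0n1, converts the requested 1073741824-byte size into a sector count by dividing by 4096, and creates partition 1 from sector 2048 to 264191 while scripts/sync_dev_uevents.sh waits for the kernel's partition uevent. A condensed sketch with values copied from this log; the divisor and sector math may differ on other disks:

# Sketch of the partition step logged above (values copied from this run).
disk=/dev/nvme0n1
size=$(( 1073741824 / 4096 ))                        # 262144, as computed at common.sh@51
start=2048
end=$(( start + size - 1 ))                          # 264191, matching --new=1:2048:264191

sgdisk "$disk" --zap-all                             # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:"$start":"$end" # hold the device lock while writing the table
partprobe "$disk"    # hypothetical stand-in for the repo's sync_dev_uevents.sh wait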
00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 69065 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.449 21:45:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.449 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.449 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:00.449 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.449 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.449 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.449 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.708 21:45:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.708 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.708 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.966 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:00.966 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:00.966 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:00.966 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:00.966 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:00.966 21:45:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:00.966 21:45:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.966 21:45:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:00.966 21:45:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.224 21:45:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.482 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.740 21:45:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.999 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.258 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.258 00:05:02.258 real 0m3.974s 00:05:02.258 user 0m0.674s 00:05:02.258 sys 0m0.989s 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.258 21:45:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:02.258 ************************************ 00:05:02.258 END TEST nvme_mount 00:05:02.258 
************************************ 00:05:02.258 21:45:07 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:02.258 21:45:07 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.258 21:45:07 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.258 21:45:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:02.258 ************************************ 00:05:02.258 START TEST dm_mount 00:05:02.258 ************************************ 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:02.258 21:45:07 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:03.191 Creating new GPT entries in memory. 00:05:03.191 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:03.191 other utilities. 00:05:03.191 21:45:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:03.191 21:45:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.191 21:45:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.191 21:45:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.191 21:45:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:04.566 Creating new GPT entries in memory. 00:05:04.566 The operation has completed successfully. 
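[editor's note] The dm_mount trace above and just below drives sgdisk through setup/common.sh to lay out two equal partitions on the test disk. A minimal standalone sketch of that partitioning step, using the device name and sector ranges copied from the flock'd sgdisk calls in the trace (partprobe is an assumption here; the test itself waits on udev via sync_dev_uevents.sh instead):

    # wipe any existing GPT/MBR metadata, then create two 262144-sector partitions
    # (128 MiB each at 512-byte sectors; ranges taken from the trace)
    sgdisk /dev/nvme0n1 --zap-all
    sgdisk /dev/nvme0n1 --new=1:2048:264191      # becomes nvme0n1p1
    sgdisk /dev/nvme0n1 --new=2:264192:526335    # becomes nvme0n1p2
    partprobe /dev/nvme0n1                       # ask the kernel to re-read the table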
00:05:04.566 21:45:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.566 21:45:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.566 21:45:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.566 21:45:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.566 21:45:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:05.500 The operation has completed successfully. 00:05:05.500 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:05.500 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.500 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 69498 00:05:05.500 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:05.500 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.500 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:05.500 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.501 21:45:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:05.501 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:05.501 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:05.501 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:05.501 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.501 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:05.501 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:05.760 21:45:11 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:05.760 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.019 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:06.277 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:05:06.277 00:05:06.277 real 0m4.161s 00:05:06.277 user 0m0.451s 00:05:06.277 sys 0m0.681s 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.277 ************************************ 00:05:06.277 END TEST dm_mount 00:05:06.277 21:45:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:06.277 ************************************ 00:05:06.535 21:45:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:06.535 21:45:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:06.535 21:45:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.535 21:45:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.535 21:45:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:06.536 21:45:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:06.536 21:45:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:06.794 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:06.794 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:06.794 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:06.794 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:06.794 21:45:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:06.794 21:45:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.794 21:45:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:06.794 21:45:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.794 21:45:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:06.794 21:45:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:06.794 21:45:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:06.794 00:05:06.794 real 0m9.655s 00:05:06.794 user 0m1.775s 00:05:06.794 sys 0m2.264s 00:05:06.794 21:45:12 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.794 ************************************ 00:05:06.794 21:45:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 END TEST devices 00:05:06.794 ************************************ 00:05:06.794 00:05:06.794 real 0m21.559s 00:05:06.794 user 0m6.979s 00:05:06.794 sys 0m8.946s 00:05:06.794 21:45:12 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.794 21:45:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 ************************************ 00:05:06.794 END TEST setup.sh 00:05:06.794 ************************************ 00:05:06.794 21:45:12 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:07.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.361 Hugepages 00:05:07.361 node hugesize free / total 00:05:07.361 node0 1048576kB 0 / 0 00:05:07.361 node0 2048kB 2048 / 2048 00:05:07.361 00:05:07.361 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.619 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:07.619 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:07.619 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:05:07.619 21:45:13 -- spdk/autotest.sh@130 -- # uname -s 00:05:07.619 21:45:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:07.619 21:45:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:07.619 21:45:13 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.442 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.442 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.442 21:45:14 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:09.376 21:45:15 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:09.376 21:45:15 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:09.376 21:45:15 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.376 21:45:15 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:09.376 21:45:15 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:09.376 21:45:15 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:09.376 21:45:15 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.376 21:45:15 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.376 21:45:15 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:09.634 21:45:15 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:09.634 21:45:15 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:09.634 21:45:15 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.892 Waiting for block devices as requested 00:05:09.892 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.892 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.149 21:45:15 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:10.149 21:45:15 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:10.149 21:45:15 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:10.149 21:45:15 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:10.149 21:45:15 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:10.149 21:45:15 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # grep unvmcap 
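[editor's note] The nvme_namespace_revert block that starts above and continues below loops over the controllers reported by gen_nvme.sh and inspects each with nvme-cli. A rough standalone equivalent of the per-controller check, assuming nvme-cli is installed and using /dev/nvme0 as the example controller (this paraphrases what autotest_common.sh does; it is not the exact helper):

    ctrlr=/dev/nvme0
    # OACS bit 3 (0x8) advertises Namespace Management support
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then
        # the trace shows unvmcap == 0, so there is no unallocated capacity to revert
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "nothing to revert on $ctrlr"
    fi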
00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:10.149 21:45:15 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:10.149 21:45:15 -- common/autotest_common.sh@1553 -- # continue 00:05:10.149 21:45:15 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:10.149 21:45:15 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:10.149 21:45:15 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:10.149 21:45:15 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:10.149 21:45:15 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:10.149 21:45:15 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:10.149 21:45:15 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:10.149 21:45:15 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:10.149 21:45:15 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:10.149 21:45:15 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:10.150 21:45:15 -- common/autotest_common.sh@1553 -- # continue 00:05:10.150 21:45:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:10.150 21:45:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.150 21:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:10.150 21:45:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:10.150 21:45:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:10.150 21:45:15 -- common/autotest_common.sh@10 -- # set +x 00:05:10.150 21:45:15 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.974 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.974 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.974 21:45:16 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:10.974 21:45:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.974 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:10.974 21:45:16 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:10.974 21:45:16 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:10.974 21:45:16 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.974 21:45:16 -- 
common/autotest_common.sh@1573 -- # bdfs=() 00:05:10.974 21:45:16 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:10.974 21:45:16 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:10.974 21:45:16 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:10.974 21:45:16 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:10.974 21:45:16 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.974 21:45:16 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:10.974 21:45:16 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:11.232 21:45:16 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:11.232 21:45:16 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:11.232 21:45:16 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:11.232 21:45:16 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:11.232 21:45:16 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:11.232 21:45:16 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.232 21:45:16 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:11.232 21:45:16 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:11.232 21:45:16 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:11.232 21:45:16 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:11.232 21:45:16 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:11.232 21:45:16 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:11.232 21:45:16 -- common/autotest_common.sh@1589 -- # return 0 00:05:11.232 21:45:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:11.232 21:45:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:11.232 21:45:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:11.232 21:45:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:11.232 21:45:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:11.232 21:45:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:11.232 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.232 21:45:16 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:05:11.232 21:45:16 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:11.232 21:45:16 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:11.232 21:45:16 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:11.232 21:45:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.232 21:45:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.232 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.232 ************************************ 00:05:11.232 START TEST env 00:05:11.232 ************************************ 00:05:11.232 21:45:16 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:11.232 * Looking for test storage... 
00:05:11.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:11.232 21:45:16 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:11.232 21:45:16 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.232 21:45:16 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.232 21:45:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.232 ************************************ 00:05:11.232 START TEST env_memory 00:05:11.232 ************************************ 00:05:11.232 21:45:16 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:11.232 00:05:11.232 00:05:11.232 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.232 http://cunit.sourceforge.net/ 00:05:11.232 00:05:11.232 00:05:11.232 Suite: memory 00:05:11.233 Test: alloc and free memory map ...[2024-07-24 21:45:16.867095] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:11.233 passed 00:05:11.233 Test: mem map translation ...[2024-07-24 21:45:16.894795] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:11.233 [2024-07-24 21:45:16.895121] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:11.233 [2024-07-24 21:45:16.895287] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:11.233 [2024-07-24 21:45:16.895470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:11.233 passed 00:05:11.233 Test: mem map registration ...[2024-07-24 21:45:16.946171] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:11.233 [2024-07-24 21:45:16.946473] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:11.491 passed 00:05:11.491 Test: mem map adjacent registrations ...passed 00:05:11.491 00:05:11.491 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.491 suites 1 1 n/a 0 0 00:05:11.491 tests 4 4 4 0 0 00:05:11.491 asserts 152 152 152 0 n/a 00:05:11.491 00:05:11.491 Elapsed time = 0.171 seconds 00:05:11.491 00:05:11.491 real 0m0.188s 00:05:11.491 user 0m0.171s 00:05:11.491 sys 0m0.013s 00:05:11.491 21:45:17 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.491 21:45:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 ************************************ 00:05:11.491 END TEST env_memory 00:05:11.491 ************************************ 00:05:11.491 21:45:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.491 21:45:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.491 21:45:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.491 21:45:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 ************************************ 00:05:11.491 START TEST env_vtophys 00:05:11.491 ************************************ 00:05:11.491 21:45:17 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.491 EAL: lib.eal log level changed from notice to debug 00:05:11.491 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 1 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 2 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 3 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 4 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 5 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 6 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 7 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 8 as core 0 on socket 0 00:05:11.491 EAL: Detected lcore 9 as core 0 on socket 0 00:05:11.491 EAL: Maximum logical cores by configuration: 128 00:05:11.491 EAL: Detected CPU lcores: 10 00:05:11.491 EAL: Detected NUMA nodes: 1 00:05:11.491 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:11.491 EAL: Detected shared linkage of DPDK 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:11.491 EAL: Registered [vdev] bus. 00:05:11.491 EAL: bus.vdev log level changed from disabled to notice 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:11.491 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:11.491 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:11.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:11.491 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 EAL: Selected IOVA mode 'PA' 00:05:11.491 EAL: Probing VFIO support... 00:05:11.491 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.491 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:11.491 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.491 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.491 EAL: Setting up physically contiguous memory... 
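[editor's note] The EAL bring-up above assumes 2 MiB hugepages were reserved before the test ran (the setup.sh status output earlier in this run showed node0 with 2048 of 2048 kB pages). Reserving them by hand on a single-node VM like this one would look roughly like the following; these commands are a generic sketch, not something taken from the trace:

    # reserve 2048 x 2 MiB hugepages and make sure hugetlbfs is mounted
    echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
    grep -q hugetlbfs /proc/mounts || sudo mount -t hugetlbfs nodev /dev/hugepages
    grep Huge /proc/meminfo    # sanity check: HugePages_Total should read 2048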
00:05:11.491 EAL: Setting maximum number of open files to 524288 00:05:11.491 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.491 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.491 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.491 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.491 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.491 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.491 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.491 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.491 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.491 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.491 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.491 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.491 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.491 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.491 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.491 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.491 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.491 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.491 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.491 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.491 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.491 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.491 EAL: Hugepages will be freed exactly as allocated. 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 EAL: TSC frequency is ~2200000 KHz 00:05:11.750 EAL: Main lcore 0 is ready (tid=7f83c84c7a00;cpuset=[0]) 00:05:11.750 EAL: Trying to obtain current memory policy. 00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 0 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.750 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:11.750 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.750 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:11.750 00:05:11.750 00:05:11.750 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.750 http://cunit.sourceforge.net/ 00:05:11.750 00:05:11.750 00:05:11.750 Suite: components_suite 00:05:11.750 Test: vtophys_malloc_test ...passed 00:05:11.750 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
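[editor's note] Each of the four memseg lists above reserves 0x400000000 bytes of virtual address space, which is exactly n_segs x hugepage_sz from the "Creating 4 segment lists" line. A quick check of that arithmetic, using only the numbers the log prints:

    # 8192 segments x 2 MiB hugepages = 16 GiB of VA per memseg list
    printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000
    # four lists => 64 GiB of reserved VA, backed lazily as hugepages are allocated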
00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.750 EAL: Trying to obtain current memory policy. 00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.750 EAL: Trying to obtain current memory policy. 00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.750 EAL: Trying to obtain current memory policy. 00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.750 EAL: Trying to obtain current memory policy. 00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.750 EAL: Trying to obtain current memory policy. 
00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.750 EAL: Trying to obtain current memory policy. 00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.750 EAL: Trying to obtain current memory policy. 00:05:11.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.750 EAL: Restoring previous memory policy: 4 00:05:11.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.750 EAL: request: mp_malloc_sync 00:05:11.750 EAL: No shared files mode enabled, IPC is disabled 00:05:11.750 EAL: Heap on socket 0 was expanded by 258MB 00:05:12.009 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.009 EAL: request: mp_malloc_sync 00:05:12.009 EAL: No shared files mode enabled, IPC is disabled 00:05:12.009 EAL: Heap on socket 0 was shrunk by 258MB 00:05:12.009 EAL: Trying to obtain current memory policy. 00:05:12.009 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.009 EAL: Restoring previous memory policy: 4 00:05:12.009 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.009 EAL: request: mp_malloc_sync 00:05:12.009 EAL: No shared files mode enabled, IPC is disabled 00:05:12.009 EAL: Heap on socket 0 was expanded by 514MB 00:05:12.267 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.267 EAL: request: mp_malloc_sync 00:05:12.267 EAL: No shared files mode enabled, IPC is disabled 00:05:12.267 EAL: Heap on socket 0 was shrunk by 514MB 00:05:12.267 EAL: Trying to obtain current memory policy. 
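[editor's note] The vtophys_spdk_malloc_test rounds above and below expand the heap by 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. (2^k + 2) MB per round. This is an observation from the numbers in the trace rather than from the test source; the sequence can be reproduced with a one-liner:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB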
00:05:12.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.525 EAL: Restoring previous memory policy: 4 00:05:12.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.525 EAL: request: mp_malloc_sync 00:05:12.525 EAL: No shared files mode enabled, IPC is disabled 00:05:12.525 EAL: Heap on socket 0 was expanded by 1026MB 00:05:12.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.040 passed 00:05:13.040 00:05:13.040 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.040 suites 1 1 n/a 0 0 00:05:13.040 tests 2 2 2 0 0 00:05:13.040 asserts 5246 5246 5246 0 n/a 00:05:13.040 00:05:13.040 Elapsed time = 1.294 seconds 00:05:13.040 EAL: request: mp_malloc_sync 00:05:13.040 EAL: No shared files mode enabled, IPC is disabled 00:05:13.040 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:13.040 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.040 EAL: request: mp_malloc_sync 00:05:13.040 EAL: No shared files mode enabled, IPC is disabled 00:05:13.040 EAL: Heap on socket 0 was shrunk by 2MB 00:05:13.040 EAL: No shared files mode enabled, IPC is disabled 00:05:13.040 EAL: No shared files mode enabled, IPC is disabled 00:05:13.040 EAL: No shared files mode enabled, IPC is disabled 00:05:13.040 00:05:13.040 real 0m1.491s 00:05:13.040 user 0m0.808s 00:05:13.040 sys 0m0.548s 00:05:13.040 21:45:18 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.040 ************************************ 00:05:13.040 END TEST env_vtophys 00:05:13.040 ************************************ 00:05:13.040 21:45:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:13.040 21:45:18 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:13.040 21:45:18 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.040 21:45:18 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.040 21:45:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.040 ************************************ 00:05:13.040 START TEST env_pci 00:05:13.040 ************************************ 00:05:13.040 21:45:18 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:13.040 00:05:13.040 00:05:13.040 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.040 http://cunit.sourceforge.net/ 00:05:13.040 00:05:13.040 00:05:13.040 Suite: pci 00:05:13.040 Test: pci_hook ...[2024-07-24 21:45:18.622508] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70685 has claimed it 00:05:13.040 passed 00:05:13.040 00:05:13.040 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.040 suites 1 1 n/a 0 0 00:05:13.041 tests 1 1 1 0 0 00:05:13.041 asserts 25 25 25 0 n/a 00:05:13.041 00:05:13.041 Elapsed time = 0.002 seconds 00:05:13.041 EAL: Cannot find device (10000:00:01.0) 00:05:13.041 EAL: Failed to attach device on primary process 00:05:13.041 00:05:13.041 real 0m0.020s 00:05:13.041 user 0m0.011s 00:05:13.041 sys 0m0.009s 00:05:13.041 ************************************ 00:05:13.041 END TEST env_pci 00:05:13.041 ************************************ 00:05:13.041 21:45:18 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.041 21:45:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:13.041 21:45:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:13.041 21:45:18 env -- env/env.sh@15 -- # uname 00:05:13.041 21:45:18 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:13.041 21:45:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:13.041 21:45:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:13.041 21:45:18 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:13.041 21:45:18 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.041 21:45:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.041 ************************************ 00:05:13.041 START TEST env_dpdk_post_init 00:05:13.041 ************************************ 00:05:13.041 21:45:18 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:13.041 EAL: Detected CPU lcores: 10 00:05:13.041 EAL: Detected NUMA nodes: 1 00:05:13.041 EAL: Detected shared linkage of DPDK 00:05:13.041 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.041 EAL: Selected IOVA mode 'PA' 00:05:13.299 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.299 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:13.299 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:13.299 Starting DPDK initialization... 00:05:13.299 Starting SPDK post initialization... 00:05:13.299 SPDK NVMe probe 00:05:13.299 Attaching to 0000:00:10.0 00:05:13.299 Attaching to 0000:00:11.0 00:05:13.299 Attached to 0000:00:10.0 00:05:13.299 Attached to 0000:00:11.0 00:05:13.299 Cleaning up... 00:05:13.299 00:05:13.299 real 0m0.175s 00:05:13.299 user 0m0.039s 00:05:13.299 sys 0m0.035s 00:05:13.299 ************************************ 00:05:13.299 END TEST env_dpdk_post_init 00:05:13.299 ************************************ 00:05:13.299 21:45:18 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.299 21:45:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.299 21:45:18 env -- env/env.sh@26 -- # uname 00:05:13.299 21:45:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:13.299 21:45:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.299 21:45:18 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.299 21:45:18 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.299 21:45:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.299 ************************************ 00:05:13.299 START TEST env_mem_callbacks 00:05:13.299 ************************************ 00:05:13.299 21:45:18 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.299 EAL: Detected CPU lcores: 10 00:05:13.299 EAL: Detected NUMA nodes: 1 00:05:13.299 EAL: Detected shared linkage of DPDK 00:05:13.299 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.299 EAL: Selected IOVA mode 'PA' 00:05:13.557 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.557 00:05:13.557 00:05:13.557 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.557 http://cunit.sourceforge.net/ 00:05:13.557 00:05:13.557 00:05:13.557 Suite: memory 00:05:13.557 Test: test ... 
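[editor's note] env.sh assembles the argv shown above ("-c 0x1" plus --base-virtaddr on Linux) before handing it to the post-init and mem_callbacks binaries. Re-running the post-init test by hand with the same arguments and the paths from the trace would be approximately the following (sudo is an assumption; the CI runs with sufficient privileges already):

    ARGS='-c 0x1 --base-virtaddr=0x200000000000'
    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $ARGS
    # expected to finish with both controllers attached:
    #   Attached to 0000:00:10.0
    #   Attached to 0000:00:11.0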
00:05:13.557 register 0x200000200000 2097152 00:05:13.557 malloc 3145728 00:05:13.557 register 0x200000400000 4194304 00:05:13.557 buf 0x200000500000 len 3145728 PASSED 00:05:13.557 malloc 64 00:05:13.557 buf 0x2000004fff40 len 64 PASSED 00:05:13.557 malloc 4194304 00:05:13.557 register 0x200000800000 6291456 00:05:13.557 buf 0x200000a00000 len 4194304 PASSED 00:05:13.557 free 0x200000500000 3145728 00:05:13.557 free 0x2000004fff40 64 00:05:13.557 unregister 0x200000400000 4194304 PASSED 00:05:13.557 free 0x200000a00000 4194304 00:05:13.557 unregister 0x200000800000 6291456 PASSED 00:05:13.557 malloc 8388608 00:05:13.557 register 0x200000400000 10485760 00:05:13.557 buf 0x200000600000 len 8388608 PASSED 00:05:13.557 free 0x200000600000 8388608 00:05:13.557 unregister 0x200000400000 10485760 PASSED 00:05:13.557 passed 00:05:13.557 00:05:13.557 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.557 suites 1 1 n/a 0 0 00:05:13.557 tests 1 1 1 0 0 00:05:13.557 asserts 15 15 15 0 n/a 00:05:13.557 00:05:13.557 Elapsed time = 0.009 seconds 00:05:13.557 00:05:13.557 real 0m0.141s 00:05:13.557 user 0m0.011s 00:05:13.557 sys 0m0.028s 00:05:13.557 21:45:19 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.557 21:45:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:13.557 ************************************ 00:05:13.557 END TEST env_mem_callbacks 00:05:13.557 ************************************ 00:05:13.557 00:05:13.557 real 0m2.348s 00:05:13.557 user 0m1.161s 00:05:13.557 sys 0m0.830s 00:05:13.557 21:45:19 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.557 21:45:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.557 ************************************ 00:05:13.557 END TEST env 00:05:13.557 ************************************ 00:05:13.557 21:45:19 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.557 21:45:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.557 21:45:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.557 21:45:19 -- common/autotest_common.sh@10 -- # set +x 00:05:13.557 ************************************ 00:05:13.557 START TEST rpc 00:05:13.557 ************************************ 00:05:13.557 21:45:19 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.557 * Looking for test storage... 00:05:13.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.557 21:45:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70795 00:05:13.557 21:45:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.557 21:45:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.557 21:45:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70795 00:05:13.557 21:45:19 rpc -- common/autotest_common.sh@827 -- # '[' -z 70795 ']' 00:05:13.557 21:45:19 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.557 21:45:19 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.557 21:45:19 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.557 21:45:19 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.557 21:45:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.557 [2024-07-24 21:45:19.263922] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:13.557 [2024-07-24 21:45:19.264020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70795 ] 00:05:13.815 [2024-07-24 21:45:19.399344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.815 [2024-07-24 21:45:19.496870] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.815 [2024-07-24 21:45:19.496934] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70795' to capture a snapshot of events at runtime. 00:05:13.815 [2024-07-24 21:45:19.496946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.815 [2024-07-24 21:45:19.496955] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.815 [2024-07-24 21:45:19.496963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70795 for offline analysis/debug. 00:05:13.815 [2024-07-24 21:45:19.496999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.074 [2024-07-24 21:45:19.550549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.639 21:45:20 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:14.639 21:45:20 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:14.639 21:45:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.639 21:45:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.639 21:45:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.639 21:45:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.639 21:45:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.639 21:45:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.639 21:45:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.639 ************************************ 00:05:14.639 START TEST rpc_integrity 00:05:14.639 ************************************ 00:05:14.639 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:14.639 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.639 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.639 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.639 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.639 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.639 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.639 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.639 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:05:14.639 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.639 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.897 { 00:05:14.897 "name": "Malloc0", 00:05:14.897 "aliases": [ 00:05:14.897 "02680841-627a-41a9-af2a-90f267e4f1f9" 00:05:14.897 ], 00:05:14.897 "product_name": "Malloc disk", 00:05:14.897 "block_size": 512, 00:05:14.897 "num_blocks": 16384, 00:05:14.897 "uuid": "02680841-627a-41a9-af2a-90f267e4f1f9", 00:05:14.897 "assigned_rate_limits": { 00:05:14.897 "rw_ios_per_sec": 0, 00:05:14.897 "rw_mbytes_per_sec": 0, 00:05:14.897 "r_mbytes_per_sec": 0, 00:05:14.897 "w_mbytes_per_sec": 0 00:05:14.897 }, 00:05:14.897 "claimed": false, 00:05:14.897 "zoned": false, 00:05:14.897 "supported_io_types": { 00:05:14.897 "read": true, 00:05:14.897 "write": true, 00:05:14.897 "unmap": true, 00:05:14.897 "write_zeroes": true, 00:05:14.897 "flush": true, 00:05:14.897 "reset": true, 00:05:14.897 "compare": false, 00:05:14.897 "compare_and_write": false, 00:05:14.897 "abort": true, 00:05:14.897 "nvme_admin": false, 00:05:14.897 "nvme_io": false 00:05:14.897 }, 00:05:14.897 "memory_domains": [ 00:05:14.897 { 00:05:14.897 "dma_device_id": "system", 00:05:14.897 "dma_device_type": 1 00:05:14.897 }, 00:05:14.897 { 00:05:14.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.897 "dma_device_type": 2 00:05:14.897 } 00:05:14.897 ], 00:05:14.897 "driver_specific": {} 00:05:14.897 } 00:05:14.897 ]' 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.897 [2024-07-24 21:45:20.443975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.897 [2024-07-24 21:45:20.444036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.897 [2024-07-24 21:45:20.444057] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfcc460 00:05:14.897 [2024-07-24 21:45:20.444068] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.897 [2024-07-24 21:45:20.445941] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.897 [2024-07-24 21:45:20.445979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.897 Passthru0 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.897 21:45:20 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.897 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.897 { 00:05:14.897 "name": "Malloc0", 00:05:14.897 "aliases": [ 00:05:14.897 "02680841-627a-41a9-af2a-90f267e4f1f9" 00:05:14.897 ], 00:05:14.897 "product_name": "Malloc disk", 00:05:14.897 "block_size": 512, 00:05:14.897 "num_blocks": 16384, 00:05:14.897 "uuid": "02680841-627a-41a9-af2a-90f267e4f1f9", 00:05:14.897 "assigned_rate_limits": { 00:05:14.897 "rw_ios_per_sec": 0, 00:05:14.897 "rw_mbytes_per_sec": 0, 00:05:14.897 "r_mbytes_per_sec": 0, 00:05:14.897 "w_mbytes_per_sec": 0 00:05:14.897 }, 00:05:14.897 "claimed": true, 00:05:14.897 "claim_type": "exclusive_write", 00:05:14.897 "zoned": false, 00:05:14.897 "supported_io_types": { 00:05:14.897 "read": true, 00:05:14.897 "write": true, 00:05:14.897 "unmap": true, 00:05:14.897 "write_zeroes": true, 00:05:14.897 "flush": true, 00:05:14.897 "reset": true, 00:05:14.897 "compare": false, 00:05:14.897 "compare_and_write": false, 00:05:14.897 "abort": true, 00:05:14.897 "nvme_admin": false, 00:05:14.897 "nvme_io": false 00:05:14.897 }, 00:05:14.897 "memory_domains": [ 00:05:14.897 { 00:05:14.897 "dma_device_id": "system", 00:05:14.897 "dma_device_type": 1 00:05:14.897 }, 00:05:14.897 { 00:05:14.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.897 "dma_device_type": 2 00:05:14.897 } 00:05:14.897 ], 00:05:14.897 "driver_specific": {} 00:05:14.897 }, 00:05:14.897 { 00:05:14.897 "name": "Passthru0", 00:05:14.897 "aliases": [ 00:05:14.897 "1ff53fda-3153-53e4-8d3b-48356d0c170a" 00:05:14.897 ], 00:05:14.897 "product_name": "passthru", 00:05:14.897 "block_size": 512, 00:05:14.897 "num_blocks": 16384, 00:05:14.897 "uuid": "1ff53fda-3153-53e4-8d3b-48356d0c170a", 00:05:14.897 "assigned_rate_limits": { 00:05:14.897 "rw_ios_per_sec": 0, 00:05:14.897 "rw_mbytes_per_sec": 0, 00:05:14.897 "r_mbytes_per_sec": 0, 00:05:14.897 "w_mbytes_per_sec": 0 00:05:14.897 }, 00:05:14.897 "claimed": false, 00:05:14.897 "zoned": false, 00:05:14.897 "supported_io_types": { 00:05:14.897 "read": true, 00:05:14.897 "write": true, 00:05:14.897 "unmap": true, 00:05:14.897 "write_zeroes": true, 00:05:14.897 "flush": true, 00:05:14.897 "reset": true, 00:05:14.897 "compare": false, 00:05:14.897 "compare_and_write": false, 00:05:14.897 "abort": true, 00:05:14.897 "nvme_admin": false, 00:05:14.897 "nvme_io": false 00:05:14.897 }, 00:05:14.897 "memory_domains": [ 00:05:14.897 { 00:05:14.897 "dma_device_id": "system", 00:05:14.897 "dma_device_type": 1 00:05:14.897 }, 00:05:14.897 { 00:05:14.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.897 "dma_device_type": 2 00:05:14.897 } 00:05:14.897 ], 00:05:14.897 "driver_specific": { 00:05:14.897 "passthru": { 00:05:14.897 "name": "Passthru0", 00:05:14.897 "base_bdev_name": "Malloc0" 00:05:14.897 } 00:05:14.897 } 00:05:14.897 } 00:05:14.897 ]' 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.897 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.898 21:45:20 rpc.rpc_integrity -- 
rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.898 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.898 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.898 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.898 21:45:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.898 00:05:14.898 real 0m0.318s 00:05:14.898 user 0m0.212s 00:05:14.898 sys 0m0.034s 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.898 21:45:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.898 ************************************ 00:05:14.898 END TEST rpc_integrity 00:05:14.898 ************************************ 00:05:15.156 21:45:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:15.156 21:45:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.156 21:45:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.156 21:45:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.156 ************************************ 00:05:15.156 START TEST rpc_plugins 00:05:15.156 ************************************ 00:05:15.156 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:15.156 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:15.156 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.156 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.156 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.156 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:15.156 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:15.156 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.156 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.156 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.156 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:15.156 { 00:05:15.156 "name": "Malloc1", 00:05:15.156 "aliases": [ 00:05:15.156 "dea80437-28cc-433c-b866-94c96babc7f7" 00:05:15.156 ], 00:05:15.156 "product_name": "Malloc disk", 00:05:15.156 "block_size": 4096, 00:05:15.156 "num_blocks": 256, 00:05:15.156 "uuid": "dea80437-28cc-433c-b866-94c96babc7f7", 00:05:15.156 "assigned_rate_limits": { 00:05:15.156 "rw_ios_per_sec": 0, 00:05:15.156 "rw_mbytes_per_sec": 0, 00:05:15.156 "r_mbytes_per_sec": 0, 00:05:15.156 "w_mbytes_per_sec": 0 00:05:15.156 }, 00:05:15.156 "claimed": false, 00:05:15.156 "zoned": false, 00:05:15.156 "supported_io_types": { 00:05:15.156 "read": true, 00:05:15.156 "write": true, 00:05:15.156 "unmap": true, 00:05:15.156 "write_zeroes": true, 00:05:15.156 "flush": true, 00:05:15.156 "reset": true, 00:05:15.156 "compare": false, 00:05:15.156 "compare_and_write": false, 
00:05:15.156 "abort": true, 00:05:15.157 "nvme_admin": false, 00:05:15.157 "nvme_io": false 00:05:15.157 }, 00:05:15.157 "memory_domains": [ 00:05:15.157 { 00:05:15.157 "dma_device_id": "system", 00:05:15.157 "dma_device_type": 1 00:05:15.157 }, 00:05:15.157 { 00:05:15.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.157 "dma_device_type": 2 00:05:15.157 } 00:05:15.157 ], 00:05:15.157 "driver_specific": {} 00:05:15.157 } 00:05:15.157 ]' 00:05:15.157 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:15.157 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:15.157 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.157 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.157 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.157 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:15.157 21:45:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.157 00:05:15.157 real 0m0.151s 00:05:15.157 user 0m0.094s 00:05:15.157 sys 0m0.018s 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.157 21:45:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.157 ************************************ 00:05:15.157 END TEST rpc_plugins 00:05:15.157 ************************************ 00:05:15.157 21:45:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.157 21:45:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.157 21:45:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.157 21:45:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.157 ************************************ 00:05:15.157 START TEST rpc_trace_cmd_test 00:05:15.157 ************************************ 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:15.157 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70795", 00:05:15.157 "tpoint_group_mask": "0x8", 00:05:15.157 "iscsi_conn": { 00:05:15.157 "mask": "0x2", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "scsi": { 00:05:15.157 "mask": "0x4", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "bdev": { 00:05:15.157 "mask": "0x8", 00:05:15.157 "tpoint_mask": "0xffffffffffffffff" 00:05:15.157 }, 00:05:15.157 "nvmf_rdma": { 00:05:15.157 "mask": "0x10", 00:05:15.157 "tpoint_mask": "0x0" 
00:05:15.157 }, 00:05:15.157 "nvmf_tcp": { 00:05:15.157 "mask": "0x20", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "ftl": { 00:05:15.157 "mask": "0x40", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "blobfs": { 00:05:15.157 "mask": "0x80", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "dsa": { 00:05:15.157 "mask": "0x200", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "thread": { 00:05:15.157 "mask": "0x400", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "nvme_pcie": { 00:05:15.157 "mask": "0x800", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "iaa": { 00:05:15.157 "mask": "0x1000", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "nvme_tcp": { 00:05:15.157 "mask": "0x2000", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "bdev_nvme": { 00:05:15.157 "mask": "0x4000", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 }, 00:05:15.157 "sock": { 00:05:15.157 "mask": "0x8000", 00:05:15.157 "tpoint_mask": "0x0" 00:05:15.157 } 00:05:15.157 }' 00:05:15.157 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:15.415 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:15.415 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.415 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.415 21:45:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:15.415 21:45:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.415 21:45:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.415 21:45:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.415 21:45:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.415 21:45:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.415 00:05:15.415 real 0m0.262s 00:05:15.415 user 0m0.234s 00:05:15.415 sys 0m0.020s 00:05:15.415 21:45:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.415 ************************************ 00:05:15.415 END TEST rpc_trace_cmd_test 00:05:15.415 21:45:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.415 ************************************ 00:05:15.674 21:45:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.674 21:45:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.674 21:45:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.674 21:45:21 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.674 21:45:21 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.674 21:45:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.674 ************************************ 00:05:15.674 START TEST rpc_daemon_integrity 00:05:15.674 ************************************ 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 
00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.674 { 00:05:15.674 "name": "Malloc2", 00:05:15.674 "aliases": [ 00:05:15.674 "ecfee1de-2fad-4b12-95cc-7235bcd17200" 00:05:15.674 ], 00:05:15.674 "product_name": "Malloc disk", 00:05:15.674 "block_size": 512, 00:05:15.674 "num_blocks": 16384, 00:05:15.674 "uuid": "ecfee1de-2fad-4b12-95cc-7235bcd17200", 00:05:15.674 "assigned_rate_limits": { 00:05:15.674 "rw_ios_per_sec": 0, 00:05:15.674 "rw_mbytes_per_sec": 0, 00:05:15.674 "r_mbytes_per_sec": 0, 00:05:15.674 "w_mbytes_per_sec": 0 00:05:15.674 }, 00:05:15.674 "claimed": false, 00:05:15.674 "zoned": false, 00:05:15.674 "supported_io_types": { 00:05:15.674 "read": true, 00:05:15.674 "write": true, 00:05:15.674 "unmap": true, 00:05:15.674 "write_zeroes": true, 00:05:15.674 "flush": true, 00:05:15.674 "reset": true, 00:05:15.674 "compare": false, 00:05:15.674 "compare_and_write": false, 00:05:15.674 "abort": true, 00:05:15.674 "nvme_admin": false, 00:05:15.674 "nvme_io": false 00:05:15.674 }, 00:05:15.674 "memory_domains": [ 00:05:15.674 { 00:05:15.674 "dma_device_id": "system", 00:05:15.674 "dma_device_type": 1 00:05:15.674 }, 00:05:15.674 { 00:05:15.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.674 "dma_device_type": 2 00:05:15.674 } 00:05:15.674 ], 00:05:15.674 "driver_specific": {} 00:05:15.674 } 00:05:15.674 ]' 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.674 [2024-07-24 21:45:21.316600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.674 [2024-07-24 21:45:21.316678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.674 [2024-07-24 21:45:21.316698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1ba30 00:05:15.674 [2024-07-24 21:45:21.316709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.674 [2024-07-24 21:45:21.318342] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.674 [2024-07-24 21:45:21.318381] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: Passthru0 00:05:15.674 Passthru0 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.674 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.674 { 00:05:15.674 "name": "Malloc2", 00:05:15.674 "aliases": [ 00:05:15.674 "ecfee1de-2fad-4b12-95cc-7235bcd17200" 00:05:15.674 ], 00:05:15.674 "product_name": "Malloc disk", 00:05:15.674 "block_size": 512, 00:05:15.674 "num_blocks": 16384, 00:05:15.674 "uuid": "ecfee1de-2fad-4b12-95cc-7235bcd17200", 00:05:15.674 "assigned_rate_limits": { 00:05:15.674 "rw_ios_per_sec": 0, 00:05:15.674 "rw_mbytes_per_sec": 0, 00:05:15.674 "r_mbytes_per_sec": 0, 00:05:15.674 "w_mbytes_per_sec": 0 00:05:15.674 }, 00:05:15.674 "claimed": true, 00:05:15.674 "claim_type": "exclusive_write", 00:05:15.674 "zoned": false, 00:05:15.674 "supported_io_types": { 00:05:15.674 "read": true, 00:05:15.674 "write": true, 00:05:15.674 "unmap": true, 00:05:15.674 "write_zeroes": true, 00:05:15.674 "flush": true, 00:05:15.674 "reset": true, 00:05:15.674 "compare": false, 00:05:15.674 "compare_and_write": false, 00:05:15.674 "abort": true, 00:05:15.674 "nvme_admin": false, 00:05:15.674 "nvme_io": false 00:05:15.674 }, 00:05:15.674 "memory_domains": [ 00:05:15.674 { 00:05:15.674 "dma_device_id": "system", 00:05:15.674 "dma_device_type": 1 00:05:15.674 }, 00:05:15.674 { 00:05:15.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.674 "dma_device_type": 2 00:05:15.674 } 00:05:15.674 ], 00:05:15.674 "driver_specific": {} 00:05:15.674 }, 00:05:15.674 { 00:05:15.674 "name": "Passthru0", 00:05:15.674 "aliases": [ 00:05:15.675 "c5a66dd2-c5f1-5770-b761-0740d7b83cc3" 00:05:15.675 ], 00:05:15.675 "product_name": "passthru", 00:05:15.675 "block_size": 512, 00:05:15.675 "num_blocks": 16384, 00:05:15.675 "uuid": "c5a66dd2-c5f1-5770-b761-0740d7b83cc3", 00:05:15.675 "assigned_rate_limits": { 00:05:15.675 "rw_ios_per_sec": 0, 00:05:15.675 "rw_mbytes_per_sec": 0, 00:05:15.675 "r_mbytes_per_sec": 0, 00:05:15.675 "w_mbytes_per_sec": 0 00:05:15.675 }, 00:05:15.675 "claimed": false, 00:05:15.675 "zoned": false, 00:05:15.675 "supported_io_types": { 00:05:15.675 "read": true, 00:05:15.675 "write": true, 00:05:15.675 "unmap": true, 00:05:15.675 "write_zeroes": true, 00:05:15.675 "flush": true, 00:05:15.675 "reset": true, 00:05:15.675 "compare": false, 00:05:15.675 "compare_and_write": false, 00:05:15.675 "abort": true, 00:05:15.675 "nvme_admin": false, 00:05:15.675 "nvme_io": false 00:05:15.675 }, 00:05:15.675 "memory_domains": [ 00:05:15.675 { 00:05:15.675 "dma_device_id": "system", 00:05:15.675 "dma_device_type": 1 00:05:15.675 }, 00:05:15.675 { 00:05:15.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.675 "dma_device_type": 2 00:05:15.675 } 00:05:15.675 ], 00:05:15.675 "driver_specific": { 00:05:15.675 "passthru": { 00:05:15.675 "name": "Passthru0", 00:05:15.675 "base_bdev_name": "Malloc2" 00:05:15.675 } 00:05:15.675 } 00:05:15.675 } 00:05:15.675 ]' 00:05:15.675 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.933 
21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.933 00:05:15.933 real 0m0.324s 00:05:15.933 user 0m0.215s 00:05:15.933 sys 0m0.043s 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.933 21:45:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.933 ************************************ 00:05:15.933 END TEST rpc_daemon_integrity 00:05:15.933 ************************************ 00:05:15.933 21:45:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.933 21:45:21 rpc -- rpc/rpc.sh@84 -- # killprocess 70795 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@946 -- # '[' -z 70795 ']' 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@950 -- # kill -0 70795 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@951 -- # uname 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70795 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.933 killing process with pid 70795 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70795' 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@965 -- # kill 70795 00:05:15.933 21:45:21 rpc -- common/autotest_common.sh@970 -- # wait 70795 00:05:16.502 00:05:16.502 real 0m2.797s 00:05:16.502 user 0m3.664s 00:05:16.502 sys 0m0.654s 00:05:16.502 21:45:21 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.502 21:45:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.502 ************************************ 00:05:16.502 END TEST rpc 00:05:16.502 ************************************ 00:05:16.502 21:45:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:16.502 21:45:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.502 21:45:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.502 21:45:21 -- common/autotest_common.sh@10 -- # 
set +x 00:05:16.502 ************************************ 00:05:16.502 START TEST skip_rpc 00:05:16.502 ************************************ 00:05:16.502 21:45:21 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:16.502 * Looking for test storage... 00:05:16.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.502 21:45:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.502 21:45:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.502 21:45:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:16.502 21:45:22 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.502 21:45:22 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.502 21:45:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.502 ************************************ 00:05:16.502 START TEST skip_rpc 00:05:16.502 ************************************ 00:05:16.502 21:45:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:16.502 21:45:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70993 00:05:16.502 21:45:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:16.502 21:45:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.502 21:45:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:16.502 [2024-07-24 21:45:22.118458] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:16.502 [2024-07-24 21:45:22.118557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70993 ] 00:05:16.760 [2024-07-24 21:45:22.256249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.760 [2024-07-24 21:45:22.348973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.760 [2024-07-24 21:45:22.406586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70993 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 70993 ']' 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 70993 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70993 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.035 killing process with pid 70993 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70993' 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 70993 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 70993 00:05:22.035 00:05:22.035 real 0m5.416s 00:05:22.035 user 0m5.031s 00:05:22.035 sys 0m0.279s 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.035 21:45:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.035 ************************************ 00:05:22.035 END TEST skip_rpc 00:05:22.035 ************************************ 00:05:22.035 21:45:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:22.035 21:45:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.035 21:45:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.035 21:45:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.035 ************************************ 00:05:22.035 START TEST skip_rpc_with_json 00:05:22.035 ************************************ 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71074 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71074 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 71074 ']' 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.035 21:45:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.035 [2024-07-24 21:45:27.589277] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:22.035 [2024-07-24 21:45:27.590076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71074 ] 00:05:22.035 [2024-07-24 21:45:27.730547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.295 [2024-07-24 21:45:27.824052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.295 [2024-07-24 21:45:27.878390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.873 [2024-07-24 21:45:28.551660] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.873 request: 00:05:22.873 { 00:05:22.873 "trtype": "tcp", 00:05:22.873 "method": "nvmf_get_transports", 00:05:22.873 "req_id": 1 00:05:22.873 } 00:05:22.873 Got JSON-RPC error response 00:05:22.873 response: 00:05:22.873 { 00:05:22.873 "code": -19, 00:05:22.873 "message": "No such device" 00:05:22.873 } 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.873 [2024-07-24 21:45:28.559825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.873 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.157 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.157 21:45:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.157 { 00:05:23.157 "subsystems": [ 00:05:23.157 { 00:05:23.157 "subsystem": "keyring", 00:05:23.157 "config": [] 00:05:23.157 }, 00:05:23.157 { 00:05:23.157 "subsystem": "iobuf", 00:05:23.157 "config": [ 00:05:23.157 { 00:05:23.158 "method": "iobuf_set_options", 00:05:23.158 "params": { 00:05:23.158 "small_pool_count": 
8192, 00:05:23.158 "large_pool_count": 1024, 00:05:23.158 "small_bufsize": 8192, 00:05:23.158 "large_bufsize": 135168 00:05:23.158 } 00:05:23.158 } 00:05:23.158 ] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "sock", 00:05:23.158 "config": [ 00:05:23.158 { 00:05:23.158 "method": "sock_set_default_impl", 00:05:23.158 "params": { 00:05:23.158 "impl_name": "uring" 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "sock_impl_set_options", 00:05:23.158 "params": { 00:05:23.158 "impl_name": "ssl", 00:05:23.158 "recv_buf_size": 4096, 00:05:23.158 "send_buf_size": 4096, 00:05:23.158 "enable_recv_pipe": true, 00:05:23.158 "enable_quickack": false, 00:05:23.158 "enable_placement_id": 0, 00:05:23.158 "enable_zerocopy_send_server": true, 00:05:23.158 "enable_zerocopy_send_client": false, 00:05:23.158 "zerocopy_threshold": 0, 00:05:23.158 "tls_version": 0, 00:05:23.158 "enable_ktls": false 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "sock_impl_set_options", 00:05:23.158 "params": { 00:05:23.158 "impl_name": "posix", 00:05:23.158 "recv_buf_size": 2097152, 00:05:23.158 "send_buf_size": 2097152, 00:05:23.158 "enable_recv_pipe": true, 00:05:23.158 "enable_quickack": false, 00:05:23.158 "enable_placement_id": 0, 00:05:23.158 "enable_zerocopy_send_server": true, 00:05:23.158 "enable_zerocopy_send_client": false, 00:05:23.158 "zerocopy_threshold": 0, 00:05:23.158 "tls_version": 0, 00:05:23.158 "enable_ktls": false 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "sock_impl_set_options", 00:05:23.158 "params": { 00:05:23.158 "impl_name": "uring", 00:05:23.158 "recv_buf_size": 2097152, 00:05:23.158 "send_buf_size": 2097152, 00:05:23.158 "enable_recv_pipe": true, 00:05:23.158 "enable_quickack": false, 00:05:23.158 "enable_placement_id": 0, 00:05:23.158 "enable_zerocopy_send_server": false, 00:05:23.158 "enable_zerocopy_send_client": false, 00:05:23.158 "zerocopy_threshold": 0, 00:05:23.158 "tls_version": 0, 00:05:23.158 "enable_ktls": false 00:05:23.158 } 00:05:23.158 } 00:05:23.158 ] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "vmd", 00:05:23.158 "config": [] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "accel", 00:05:23.158 "config": [ 00:05:23.158 { 00:05:23.158 "method": "accel_set_options", 00:05:23.158 "params": { 00:05:23.158 "small_cache_size": 128, 00:05:23.158 "large_cache_size": 16, 00:05:23.158 "task_count": 2048, 00:05:23.158 "sequence_count": 2048, 00:05:23.158 "buf_count": 2048 00:05:23.158 } 00:05:23.158 } 00:05:23.158 ] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "bdev", 00:05:23.158 "config": [ 00:05:23.158 { 00:05:23.158 "method": "bdev_set_options", 00:05:23.158 "params": { 00:05:23.158 "bdev_io_pool_size": 65535, 00:05:23.158 "bdev_io_cache_size": 256, 00:05:23.158 "bdev_auto_examine": true, 00:05:23.158 "iobuf_small_cache_size": 128, 00:05:23.158 "iobuf_large_cache_size": 16 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "bdev_raid_set_options", 00:05:23.158 "params": { 00:05:23.158 "process_window_size_kb": 1024 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "bdev_iscsi_set_options", 00:05:23.158 "params": { 00:05:23.158 "timeout_sec": 30 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "bdev_nvme_set_options", 00:05:23.158 "params": { 00:05:23.158 "action_on_timeout": "none", 00:05:23.158 "timeout_us": 0, 00:05:23.158 "timeout_admin_us": 0, 00:05:23.158 "keep_alive_timeout_ms": 10000, 00:05:23.158 
"arbitration_burst": 0, 00:05:23.158 "low_priority_weight": 0, 00:05:23.158 "medium_priority_weight": 0, 00:05:23.158 "high_priority_weight": 0, 00:05:23.158 "nvme_adminq_poll_period_us": 10000, 00:05:23.158 "nvme_ioq_poll_period_us": 0, 00:05:23.158 "io_queue_requests": 0, 00:05:23.158 "delay_cmd_submit": true, 00:05:23.158 "transport_retry_count": 4, 00:05:23.158 "bdev_retry_count": 3, 00:05:23.158 "transport_ack_timeout": 0, 00:05:23.158 "ctrlr_loss_timeout_sec": 0, 00:05:23.158 "reconnect_delay_sec": 0, 00:05:23.158 "fast_io_fail_timeout_sec": 0, 00:05:23.158 "disable_auto_failback": false, 00:05:23.158 "generate_uuids": false, 00:05:23.158 "transport_tos": 0, 00:05:23.158 "nvme_error_stat": false, 00:05:23.158 "rdma_srq_size": 0, 00:05:23.158 "io_path_stat": false, 00:05:23.158 "allow_accel_sequence": false, 00:05:23.158 "rdma_max_cq_size": 0, 00:05:23.158 "rdma_cm_event_timeout_ms": 0, 00:05:23.158 "dhchap_digests": [ 00:05:23.158 "sha256", 00:05:23.158 "sha384", 00:05:23.158 "sha512" 00:05:23.158 ], 00:05:23.158 "dhchap_dhgroups": [ 00:05:23.158 "null", 00:05:23.158 "ffdhe2048", 00:05:23.158 "ffdhe3072", 00:05:23.158 "ffdhe4096", 00:05:23.158 "ffdhe6144", 00:05:23.158 "ffdhe8192" 00:05:23.158 ] 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "bdev_nvme_set_hotplug", 00:05:23.158 "params": { 00:05:23.158 "period_us": 100000, 00:05:23.158 "enable": false 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "bdev_wait_for_examine" 00:05:23.158 } 00:05:23.158 ] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "scsi", 00:05:23.158 "config": null 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "scheduler", 00:05:23.158 "config": [ 00:05:23.158 { 00:05:23.158 "method": "framework_set_scheduler", 00:05:23.158 "params": { 00:05:23.158 "name": "static" 00:05:23.158 } 00:05:23.158 } 00:05:23.158 ] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "vhost_scsi", 00:05:23.158 "config": [] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "vhost_blk", 00:05:23.158 "config": [] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "ublk", 00:05:23.158 "config": [] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "nbd", 00:05:23.158 "config": [] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "nvmf", 00:05:23.158 "config": [ 00:05:23.158 { 00:05:23.158 "method": "nvmf_set_config", 00:05:23.158 "params": { 00:05:23.158 "discovery_filter": "match_any", 00:05:23.158 "admin_cmd_passthru": { 00:05:23.158 "identify_ctrlr": false 00:05:23.158 } 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "nvmf_set_max_subsystems", 00:05:23.158 "params": { 00:05:23.158 "max_subsystems": 1024 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "nvmf_set_crdt", 00:05:23.158 "params": { 00:05:23.158 "crdt1": 0, 00:05:23.158 "crdt2": 0, 00:05:23.158 "crdt3": 0 00:05:23.158 } 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "method": "nvmf_create_transport", 00:05:23.158 "params": { 00:05:23.158 "trtype": "TCP", 00:05:23.158 "max_queue_depth": 128, 00:05:23.158 "max_io_qpairs_per_ctrlr": 127, 00:05:23.158 "in_capsule_data_size": 4096, 00:05:23.158 "max_io_size": 131072, 00:05:23.158 "io_unit_size": 131072, 00:05:23.158 "max_aq_depth": 128, 00:05:23.158 "num_shared_buffers": 511, 00:05:23.158 "buf_cache_size": 4294967295, 00:05:23.158 "dif_insert_or_strip": false, 00:05:23.158 "zcopy": false, 00:05:23.158 "c2h_success": true, 00:05:23.158 "sock_priority": 0, 00:05:23.158 "abort_timeout_sec": 1, 00:05:23.158 
"ack_timeout": 0, 00:05:23.158 "data_wr_pool_size": 0 00:05:23.158 } 00:05:23.158 } 00:05:23.158 ] 00:05:23.158 }, 00:05:23.158 { 00:05:23.158 "subsystem": "iscsi", 00:05:23.158 "config": [ 00:05:23.158 { 00:05:23.158 "method": "iscsi_set_options", 00:05:23.158 "params": { 00:05:23.158 "node_base": "iqn.2016-06.io.spdk", 00:05:23.158 "max_sessions": 128, 00:05:23.158 "max_connections_per_session": 2, 00:05:23.158 "max_queue_depth": 64, 00:05:23.158 "default_time2wait": 2, 00:05:23.158 "default_time2retain": 20, 00:05:23.158 "first_burst_length": 8192, 00:05:23.158 "immediate_data": true, 00:05:23.158 "allow_duplicated_isid": false, 00:05:23.158 "error_recovery_level": 0, 00:05:23.158 "nop_timeout": 60, 00:05:23.158 "nop_in_interval": 30, 00:05:23.158 "disable_chap": false, 00:05:23.158 "require_chap": false, 00:05:23.158 "mutual_chap": false, 00:05:23.158 "chap_group": 0, 00:05:23.158 "max_large_datain_per_connection": 64, 00:05:23.158 "max_r2t_per_connection": 4, 00:05:23.158 "pdu_pool_size": 36864, 00:05:23.159 "immediate_data_pool_size": 16384, 00:05:23.159 "data_out_pool_size": 2048 00:05:23.159 } 00:05:23.159 } 00:05:23.159 ] 00:05:23.159 } 00:05:23.159 ] 00:05:23.159 } 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71074 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 71074 ']' 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 71074 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71074 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.159 killing process with pid 71074 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71074' 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 71074 00:05:23.159 21:45:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 71074 00:05:23.727 21:45:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.727 21:45:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71107 00:05:23.727 21:45:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71107 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 71107 ']' 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 71107 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71107 00:05:28.989 21:45:34 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.989 killing process with pid 71107 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71107' 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 71107 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 71107 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.989 00:05:28.989 real 0m7.021s 00:05:28.989 user 0m6.722s 00:05:28.989 sys 0m0.649s 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.989 ************************************ 00:05:28.989 END TEST skip_rpc_with_json 00:05:28.989 ************************************ 00:05:28.989 21:45:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.989 21:45:34 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.989 21:45:34 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.989 21:45:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.989 ************************************ 00:05:28.989 START TEST skip_rpc_with_delay 00:05:28.989 ************************************ 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.989 [2024-07-24 21:45:34.657106] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:28.989 [2024-07-24 21:45:34.657241] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:28.989 00:05:28.989 real 0m0.076s 00:05:28.989 user 0m0.044s 00:05:28.989 sys 0m0.031s 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.989 21:45:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:28.989 ************************************ 00:05:28.989 END TEST skip_rpc_with_delay 00:05:28.989 ************************************ 00:05:29.248 21:45:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:29.248 21:45:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:29.248 21:45:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:29.248 21:45:34 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.248 21:45:34 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.248 21:45:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.248 ************************************ 00:05:29.248 START TEST exit_on_failed_rpc_init 00:05:29.248 ************************************ 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71211 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71211 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 71211 ']' 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.248 21:45:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.248 [2024-07-24 21:45:34.788098] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:29.248 [2024-07-24 21:45:34.788212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71211 ] 00:05:29.248 [2024-07-24 21:45:34.928488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.507 [2024-07-24 21:45:35.028367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.507 [2024-07-24 21:45:35.088168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.073 21:45:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.331 [2024-07-24 21:45:35.799257] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:30.331 [2024-07-24 21:45:35.799340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:05:30.331 [2024-07-24 21:45:35.934065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.331 [2024-07-24 21:45:36.030116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.331 [2024-07-24 21:45:36.030202] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:30.331 [2024-07-24 21:45:36.030219] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:30.331 [2024-07-24 21:45:36.030228] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71211 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 71211 ']' 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 71211 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71211 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.589 killing process with pid 71211 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71211' 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 71211 00:05:30.589 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 71211 00:05:30.912 00:05:30.912 real 0m1.775s 00:05:30.912 user 0m2.027s 00:05:30.912 sys 0m0.425s 00:05:30.912 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.912 21:45:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.912 ************************************ 00:05:30.912 END TEST exit_on_failed_rpc_init 00:05:30.912 ************************************ 00:05:30.912 21:45:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.912 00:05:30.912 real 0m14.566s 00:05:30.912 user 0m13.918s 00:05:30.912 sys 0m1.560s 00:05:30.912 21:45:36 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.912 ************************************ 00:05:30.912 END TEST skip_rpc 00:05:30.912 ************************************ 00:05:30.912 21:45:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.912 21:45:36 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:30.912 21:45:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.912 21:45:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.912 21:45:36 -- common/autotest_common.sh@10 -- # set +x 00:05:30.912 
************************************ 00:05:30.912 START TEST rpc_client 00:05:30.912 ************************************ 00:05:30.912 21:45:36 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.172 * Looking for test storage... 00:05:31.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:31.172 21:45:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:31.172 OK 00:05:31.172 21:45:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.172 00:05:31.172 real 0m0.096s 00:05:31.172 user 0m0.043s 00:05:31.172 sys 0m0.058s 00:05:31.172 21:45:36 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.172 ************************************ 00:05:31.172 END TEST rpc_client 00:05:31.172 ************************************ 00:05:31.172 21:45:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.172 21:45:36 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.172 21:45:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.172 21:45:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.172 21:45:36 -- common/autotest_common.sh@10 -- # set +x 00:05:31.172 ************************************ 00:05:31.172 START TEST json_config 00:05:31.172 ************************************ 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.172 21:45:36 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.172 21:45:36 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.172 21:45:36 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.172 21:45:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.172 21:45:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.172 21:45:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.172 21:45:36 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.172 21:45:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@47 -- # : 0 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.172 21:45:36 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.172 21:45:36 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.172 INFO: JSON configuration test init 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.172 21:45:36 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.172 21:45:36 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.172 21:45:36 json_config -- json_config/common.sh@10 -- # shift 00:05:31.172 21:45:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.172 21:45:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.172 21:45:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.172 21:45:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.172 21:45:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.172 21:45:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71347 00:05:31.172 Waiting for target to run... 00:05:31.172 21:45:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:31.172 21:45:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.172 21:45:36 json_config -- json_config/common.sh@25 -- # waitforlisten 71347 /var/tmp/spdk_tgt.sock 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@827 -- # '[' -z 71347 ']' 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.172 21:45:36 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.173 21:45:36 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.173 21:45:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.173 [2024-07-24 21:45:36.881127] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:31.173 [2024-07-24 21:45:36.881234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71347 ] 00:05:31.737 [2024-07-24 21:45:37.316382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.737 [2024-07-24 21:45:37.382499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.304 21:45:37 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.304 00:05:32.304 21:45:37 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:32.304 21:45:37 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.304 21:45:37 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:32.304 21:45:37 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:32.304 21:45:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:32.304 21:45:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.304 21:45:37 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:32.304 21:45:37 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:32.304 21:45:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.304 21:45:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.304 21:45:37 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.304 21:45:37 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:32.304 21:45:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.562 [2024-07-24 21:45:38.174955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.820 21:45:38 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:32.820 21:45:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:32.820 21:45:38 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:32.820 21:45:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.820 21:45:38 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:32.820 21:45:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.820 21:45:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:32.820 21:45:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:32.820 21:45:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:32.820 21:45:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:33.078 21:45:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.078 21:45:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:33.078 21:45:38 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:33.078 21:45:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:33.078 21:45:38 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.078 21:45:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.336 MallocForNvmf0 00:05:33.336 21:45:38 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.336 21:45:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.594 MallocForNvmf1 00:05:33.594 21:45:39 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.594 21:45:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.852 [2024-07-24 21:45:39.435700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.852 21:45:39 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.852 21:45:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.110 21:45:39 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.110 21:45:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.368 21:45:39 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.368 21:45:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.626 21:45:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.626 21:45:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.886 [2024-07-24 21:45:40.344242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.886 21:45:40 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:34.886 21:45:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.886 21:45:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.886 21:45:40 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.886 21:45:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.886 21:45:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.886 21:45:40 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.886 21:45:40 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.886 21:45:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.144 MallocBdevForConfigChangeCheck 00:05:35.144 21:45:40 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:35.144 21:45:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.144 21:45:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.144 21:45:40 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:35.144 21:45:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.709 INFO: shutting down applications... 00:05:35.709 21:45:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
00:05:35.709 21:45:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:35.709 21:45:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:35.709 21:45:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:35.709 21:45:41 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.966 Calling clear_iscsi_subsystem 00:05:35.966 Calling clear_nvmf_subsystem 00:05:35.966 Calling clear_nbd_subsystem 00:05:35.966 Calling clear_ublk_subsystem 00:05:35.966 Calling clear_vhost_blk_subsystem 00:05:35.966 Calling clear_vhost_scsi_subsystem 00:05:35.966 Calling clear_bdev_subsystem 00:05:35.966 21:45:41 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:35.966 21:45:41 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:35.966 21:45:41 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.966 21:45:41 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.966 21:45:41 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.966 21:45:41 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.224 21:45:41 json_config -- json_config/json_config.sh@345 -- # break 00:05:36.224 21:45:41 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:36.224 21:45:41 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:36.224 21:45:41 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.224 21:45:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.224 21:45:41 json_config -- json_config/common.sh@35 -- # [[ -n 71347 ]] 00:05:36.224 21:45:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71347 00:05:36.224 21:45:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.224 21:45:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.224 21:45:41 json_config -- json_config/common.sh@41 -- # kill -0 71347 00:05:36.224 21:45:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.790 21:45:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.790 21:45:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.790 21:45:42 json_config -- json_config/common.sh@41 -- # kill -0 71347 00:05:36.790 21:45:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.790 21:45:42 json_config -- json_config/common.sh@43 -- # break 00:05:36.790 21:45:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.790 21:45:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.790 SPDK target shutdown done 00:05:36.790 21:45:42 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.790 INFO: relaunching applications... 
00:05:36.790 21:45:42 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.790 21:45:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.790 21:45:42 json_config -- json_config/common.sh@10 -- # shift 00:05:36.790 Waiting for target to run... 00:05:36.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.790 21:45:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.790 21:45:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.790 21:45:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.790 21:45:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.790 21:45:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.790 21:45:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71543 00:05:36.790 21:45:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.790 21:45:42 json_config -- json_config/common.sh@25 -- # waitforlisten 71543 /var/tmp/spdk_tgt.sock 00:05:36.790 21:45:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.790 21:45:42 json_config -- common/autotest_common.sh@827 -- # '[' -z 71543 ']' 00:05:36.790 21:45:42 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.790 21:45:42 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.790 21:45:42 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.790 21:45:42 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.790 21:45:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.790 [2024-07-24 21:45:42.448985] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:36.790 [2024-07-24 21:45:42.449283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71543 ] 00:05:37.355 [2024-07-24 21:45:42.866001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.355 [2024-07-24 21:45:42.938552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.355 [2024-07-24 21:45:43.065972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:37.614 [2024-07-24 21:45:43.257407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.614 [2024-07-24 21:45:43.289545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.874 21:45:43 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.874 21:45:43 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:37.874 21:45:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.874 00:05:37.874 21:45:43 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:37.874 21:45:43 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:37.874 INFO: Checking if target configuration is the same... 00:05:37.874 21:45:43 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.874 21:45:43 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:37.874 21:45:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.874 + '[' 2 -ne 2 ']' 00:05:37.874 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:37.874 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:37.874 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:37.874 +++ basename /dev/fd/62 00:05:37.874 ++ mktemp /tmp/62.XXX 00:05:37.874 + tmp_file_1=/tmp/62.uRT 00:05:37.874 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.874 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.874 + tmp_file_2=/tmp/spdk_tgt_config.json.98B 00:05:37.874 + ret=0 00:05:37.874 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.132 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.390 + diff -u /tmp/62.uRT /tmp/spdk_tgt_config.json.98B 00:05:38.390 INFO: JSON config files are the same 00:05:38.390 + echo 'INFO: JSON config files are the same' 00:05:38.390 + rm /tmp/62.uRT /tmp/spdk_tgt_config.json.98B 00:05:38.390 + exit 0 00:05:38.390 INFO: changing configuration and checking if this can be detected... 00:05:38.390 21:45:43 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:38.390 21:45:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.390 21:45:43 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.390 21:45:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.647 21:45:44 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.647 21:45:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:38.647 21:45:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.647 + '[' 2 -ne 2 ']' 00:05:38.647 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:38.647 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:38.647 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:38.647 +++ basename /dev/fd/62 00:05:38.647 ++ mktemp /tmp/62.XXX 00:05:38.647 + tmp_file_1=/tmp/62.pbH 00:05:38.647 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.647 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.647 + tmp_file_2=/tmp/spdk_tgt_config.json.vAC 00:05:38.647 + ret=0 00:05:38.647 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.904 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.904 + diff -u /tmp/62.pbH /tmp/spdk_tgt_config.json.vAC 00:05:38.904 + ret=1 00:05:38.904 + echo '=== Start of file: /tmp/62.pbH ===' 00:05:38.904 + cat /tmp/62.pbH 00:05:38.904 + echo '=== End of file: /tmp/62.pbH ===' 00:05:38.904 + echo '' 00:05:38.904 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vAC ===' 00:05:38.904 + cat /tmp/spdk_tgt_config.json.vAC 00:05:38.904 + echo '=== End of file: /tmp/spdk_tgt_config.json.vAC ===' 00:05:38.904 + echo '' 00:05:38.904 + rm /tmp/62.pbH /tmp/spdk_tgt_config.json.vAC 00:05:38.904 + exit 1 00:05:38.904 INFO: configuration change detected. 00:05:38.904 21:45:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:38.904 21:45:44 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:38.904 21:45:44 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:38.905 21:45:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:38.905 21:45:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@317 -- # [[ -n 71543 ]] 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.905 21:45:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:38.905 21:45:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:38.905 21:45:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.905 21:45:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.905 21:45:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.162 21:45:44 json_config -- json_config/json_config.sh@323 -- # killprocess 71543 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@946 -- # '[' -z 71543 ']' 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@950 -- # kill -0 71543 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@951 -- # uname 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71543 00:05:39.162 
killing process with pid 71543 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71543' 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@965 -- # kill 71543 00:05:39.162 21:45:44 json_config -- common/autotest_common.sh@970 -- # wait 71543 00:05:39.162 21:45:44 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.421 21:45:44 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:39.421 21:45:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.421 21:45:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.421 21:45:44 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:39.421 INFO: Success 00:05:39.421 21:45:44 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:39.421 ************************************ 00:05:39.421 END TEST json_config 00:05:39.421 ************************************ 00:05:39.421 00:05:39.421 real 0m8.192s 00:05:39.421 user 0m11.719s 00:05:39.421 sys 0m1.673s 00:05:39.421 21:45:44 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.421 21:45:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.421 21:45:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.421 21:45:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.421 21:45:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.421 21:45:44 -- common/autotest_common.sh@10 -- # set +x 00:05:39.421 ************************************ 00:05:39.421 START TEST json_config_extra_key 00:05:39.421 ************************************ 00:05:39.421 21:45:44 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:05:39.421 21:45:45 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.421 21:45:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.421 21:45:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.421 21:45:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.421 21:45:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.421 21:45:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.421 21:45:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.421 21:45:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:39.421 21:45:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.421 21:45:45 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:39.421 21:45:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:39.421 INFO: launching applications... 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:39.421 21:45:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71678 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.421 Waiting for target to run... 
00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.421 21:45:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71678 /var/tmp/spdk_tgt.sock 00:05:39.421 21:45:45 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 71678 ']' 00:05:39.421 21:45:45 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.421 21:45:45 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.421 21:45:45 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.421 21:45:45 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.421 21:45:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.421 [2024-07-24 21:45:45.110166] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:39.421 [2024-07-24 21:45:45.110479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71678 ] 00:05:39.988 [2024-07-24 21:45:45.542421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.988 [2024-07-24 21:45:45.607384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.988 [2024-07-24 21:45:45.628027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.553 00:05:40.553 INFO: shutting down applications... 00:05:40.553 21:45:46 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.553 21:45:46 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:40.553 21:45:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:40.553 21:45:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:40.553 21:45:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:40.553 21:45:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:40.553 21:45:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.553 21:45:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71678 ]] 00:05:40.553 21:45:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71678 00:05:40.554 21:45:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.554 21:45:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.554 21:45:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71678 00:05:40.554 21:45:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.120 21:45:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.120 21:45:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.120 21:45:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71678 00:05:41.120 21:45:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.120 21:45:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:41.120 21:45:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.120 21:45:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.120 SPDK target shutdown done 00:05:41.120 21:45:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.120 Success 00:05:41.120 00:05:41.120 real 0m1.736s 00:05:41.120 user 0m1.680s 00:05:41.120 sys 0m0.458s 00:05:41.120 21:45:46 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.120 ************************************ 00:05:41.120 END TEST json_config_extra_key 00:05:41.120 ************************************ 00:05:41.120 21:45:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 21:45:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.121 21:45:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.121 21:45:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.121 21:45:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 ************************************ 00:05:41.121 START TEST alias_rpc 00:05:41.121 ************************************ 00:05:41.121 21:45:46 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.121 * Looking for test storage... 
00:05:41.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:41.121 21:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.379 21:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71748 00:05:41.379 21:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.379 21:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71748 00:05:41.379 21:45:46 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 71748 ']' 00:05:41.379 21:45:46 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.379 21:45:46 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.379 21:45:46 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.379 21:45:46 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.379 21:45:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.379 [2024-07-24 21:45:46.897213] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:41.379 [2024-07-24 21:45:46.897543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71748 ] 00:05:41.379 [2024-07-24 21:45:47.039131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.638 [2024-07-24 21:45:47.131681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.638 [2024-07-24 21:45:47.189716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.897 21:45:47 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.897 21:45:47 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:41.897 21:45:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:42.156 21:45:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71748 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 71748 ']' 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 71748 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71748 00:05:42.156 killing process with pid 71748 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71748' 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@965 -- # kill 71748 00:05:42.156 21:45:47 alias_rpc -- common/autotest_common.sh@970 -- # wait 71748 00:05:42.414 ************************************ 00:05:42.414 END TEST alias_rpc 00:05:42.414 ************************************ 00:05:42.414 00:05:42.414 real 0m1.335s 00:05:42.414 user 0m1.427s 00:05:42.414 sys 0m0.409s 00:05:42.414 21:45:48 alias_rpc -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:05:42.414 21:45:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.414 21:45:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:42.414 21:45:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.414 21:45:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.414 21:45:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.414 21:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.672 ************************************ 00:05:42.672 START TEST spdkcli_tcp 00:05:42.672 ************************************ 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.672 * Looking for test storage... 00:05:42.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71816 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.672 21:45:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71816 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 71816 ']' 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.672 21:45:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.672 [2024-07-24 21:45:48.276820] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:42.672 [2024-07-24 21:45:48.276935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71816 ] 00:05:42.931 [2024-07-24 21:45:48.414865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.931 [2024-07-24 21:45:48.513166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.931 [2024-07-24 21:45:48.513222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.931 [2024-07-24 21:45:48.572554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.866 21:45:49 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.866 21:45:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:43.866 21:45:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.866 21:45:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71834 00:05:43.866 21:45:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.866 [ 00:05:43.866 "bdev_malloc_delete", 00:05:43.866 "bdev_malloc_create", 00:05:43.866 "bdev_null_resize", 00:05:43.866 "bdev_null_delete", 00:05:43.866 "bdev_null_create", 00:05:43.866 "bdev_nvme_cuse_unregister", 00:05:43.866 "bdev_nvme_cuse_register", 00:05:43.866 "bdev_opal_new_user", 00:05:43.866 "bdev_opal_set_lock_state", 00:05:43.866 "bdev_opal_delete", 00:05:43.866 "bdev_opal_get_info", 00:05:43.866 "bdev_opal_create", 00:05:43.866 "bdev_nvme_opal_revert", 00:05:43.866 "bdev_nvme_opal_init", 00:05:43.866 "bdev_nvme_send_cmd", 00:05:43.866 "bdev_nvme_get_path_iostat", 00:05:43.866 "bdev_nvme_get_mdns_discovery_info", 00:05:43.866 "bdev_nvme_stop_mdns_discovery", 00:05:43.866 "bdev_nvme_start_mdns_discovery", 00:05:43.866 "bdev_nvme_set_multipath_policy", 00:05:43.866 "bdev_nvme_set_preferred_path", 00:05:43.866 "bdev_nvme_get_io_paths", 00:05:43.866 "bdev_nvme_remove_error_injection", 00:05:43.866 "bdev_nvme_add_error_injection", 00:05:43.866 "bdev_nvme_get_discovery_info", 00:05:43.866 "bdev_nvme_stop_discovery", 00:05:43.866 "bdev_nvme_start_discovery", 00:05:43.866 "bdev_nvme_get_controller_health_info", 00:05:43.866 "bdev_nvme_disable_controller", 00:05:43.866 "bdev_nvme_enable_controller", 00:05:43.866 "bdev_nvme_reset_controller", 00:05:43.866 "bdev_nvme_get_transport_statistics", 00:05:43.866 "bdev_nvme_apply_firmware", 00:05:43.866 "bdev_nvme_detach_controller", 00:05:43.866 "bdev_nvme_get_controllers", 00:05:43.866 "bdev_nvme_attach_controller", 00:05:43.866 "bdev_nvme_set_hotplug", 00:05:43.866 "bdev_nvme_set_options", 00:05:43.866 "bdev_passthru_delete", 00:05:43.866 "bdev_passthru_create", 00:05:43.866 "bdev_lvol_set_parent_bdev", 00:05:43.866 "bdev_lvol_set_parent", 00:05:43.866 "bdev_lvol_check_shallow_copy", 00:05:43.866 "bdev_lvol_start_shallow_copy", 00:05:43.866 "bdev_lvol_grow_lvstore", 00:05:43.866 "bdev_lvol_get_lvols", 00:05:43.866 "bdev_lvol_get_lvstores", 00:05:43.866 "bdev_lvol_delete", 00:05:43.866 "bdev_lvol_set_read_only", 00:05:43.866 "bdev_lvol_resize", 00:05:43.866 "bdev_lvol_decouple_parent", 00:05:43.866 "bdev_lvol_inflate", 00:05:43.866 "bdev_lvol_rename", 00:05:43.866 "bdev_lvol_clone_bdev", 00:05:43.866 "bdev_lvol_clone", 00:05:43.866 "bdev_lvol_snapshot", 00:05:43.866 "bdev_lvol_create", 00:05:43.866 
"bdev_lvol_delete_lvstore", 00:05:43.866 "bdev_lvol_rename_lvstore", 00:05:43.866 "bdev_lvol_create_lvstore", 00:05:43.866 "bdev_raid_set_options", 00:05:43.866 "bdev_raid_remove_base_bdev", 00:05:43.866 "bdev_raid_add_base_bdev", 00:05:43.866 "bdev_raid_delete", 00:05:43.866 "bdev_raid_create", 00:05:43.866 "bdev_raid_get_bdevs", 00:05:43.866 "bdev_error_inject_error", 00:05:43.866 "bdev_error_delete", 00:05:43.866 "bdev_error_create", 00:05:43.866 "bdev_split_delete", 00:05:43.866 "bdev_split_create", 00:05:43.866 "bdev_delay_delete", 00:05:43.866 "bdev_delay_create", 00:05:43.866 "bdev_delay_update_latency", 00:05:43.866 "bdev_zone_block_delete", 00:05:43.866 "bdev_zone_block_create", 00:05:43.866 "blobfs_create", 00:05:43.866 "blobfs_detect", 00:05:43.866 "blobfs_set_cache_size", 00:05:43.866 "bdev_aio_delete", 00:05:43.866 "bdev_aio_rescan", 00:05:43.866 "bdev_aio_create", 00:05:43.866 "bdev_ftl_set_property", 00:05:43.866 "bdev_ftl_get_properties", 00:05:43.866 "bdev_ftl_get_stats", 00:05:43.866 "bdev_ftl_unmap", 00:05:43.866 "bdev_ftl_unload", 00:05:43.866 "bdev_ftl_delete", 00:05:43.866 "bdev_ftl_load", 00:05:43.866 "bdev_ftl_create", 00:05:43.866 "bdev_virtio_attach_controller", 00:05:43.866 "bdev_virtio_scsi_get_devices", 00:05:43.866 "bdev_virtio_detach_controller", 00:05:43.866 "bdev_virtio_blk_set_hotplug", 00:05:43.866 "bdev_iscsi_delete", 00:05:43.866 "bdev_iscsi_create", 00:05:43.866 "bdev_iscsi_set_options", 00:05:43.866 "bdev_uring_delete", 00:05:43.866 "bdev_uring_rescan", 00:05:43.866 "bdev_uring_create", 00:05:43.866 "accel_error_inject_error", 00:05:43.866 "ioat_scan_accel_module", 00:05:43.866 "dsa_scan_accel_module", 00:05:43.866 "iaa_scan_accel_module", 00:05:43.866 "keyring_file_remove_key", 00:05:43.866 "keyring_file_add_key", 00:05:43.866 "keyring_linux_set_options", 00:05:43.866 "iscsi_get_histogram", 00:05:43.866 "iscsi_enable_histogram", 00:05:43.866 "iscsi_set_options", 00:05:43.866 "iscsi_get_auth_groups", 00:05:43.866 "iscsi_auth_group_remove_secret", 00:05:43.866 "iscsi_auth_group_add_secret", 00:05:43.866 "iscsi_delete_auth_group", 00:05:43.866 "iscsi_create_auth_group", 00:05:43.866 "iscsi_set_discovery_auth", 00:05:43.866 "iscsi_get_options", 00:05:43.866 "iscsi_target_node_request_logout", 00:05:43.866 "iscsi_target_node_set_redirect", 00:05:43.866 "iscsi_target_node_set_auth", 00:05:43.866 "iscsi_target_node_add_lun", 00:05:43.866 "iscsi_get_stats", 00:05:43.866 "iscsi_get_connections", 00:05:43.866 "iscsi_portal_group_set_auth", 00:05:43.866 "iscsi_start_portal_group", 00:05:43.866 "iscsi_delete_portal_group", 00:05:43.866 "iscsi_create_portal_group", 00:05:43.866 "iscsi_get_portal_groups", 00:05:43.866 "iscsi_delete_target_node", 00:05:43.866 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.866 "iscsi_target_node_add_pg_ig_maps", 00:05:43.866 "iscsi_create_target_node", 00:05:43.866 "iscsi_get_target_nodes", 00:05:43.866 "iscsi_delete_initiator_group", 00:05:43.866 "iscsi_initiator_group_remove_initiators", 00:05:43.866 "iscsi_initiator_group_add_initiators", 00:05:43.866 "iscsi_create_initiator_group", 00:05:43.866 "iscsi_get_initiator_groups", 00:05:43.866 "nvmf_set_crdt", 00:05:43.866 "nvmf_set_config", 00:05:43.866 "nvmf_set_max_subsystems", 00:05:43.866 "nvmf_stop_mdns_prr", 00:05:43.866 "nvmf_publish_mdns_prr", 00:05:43.866 "nvmf_subsystem_get_listeners", 00:05:43.866 "nvmf_subsystem_get_qpairs", 00:05:43.866 "nvmf_subsystem_get_controllers", 00:05:43.866 "nvmf_get_stats", 00:05:43.866 "nvmf_get_transports", 00:05:43.866 
"nvmf_create_transport", 00:05:43.866 "nvmf_get_targets", 00:05:43.866 "nvmf_delete_target", 00:05:43.866 "nvmf_create_target", 00:05:43.866 "nvmf_subsystem_allow_any_host", 00:05:43.866 "nvmf_subsystem_remove_host", 00:05:43.866 "nvmf_subsystem_add_host", 00:05:43.866 "nvmf_ns_remove_host", 00:05:43.866 "nvmf_ns_add_host", 00:05:43.866 "nvmf_subsystem_remove_ns", 00:05:43.866 "nvmf_subsystem_add_ns", 00:05:43.866 "nvmf_subsystem_listener_set_ana_state", 00:05:43.866 "nvmf_discovery_get_referrals", 00:05:43.866 "nvmf_discovery_remove_referral", 00:05:43.866 "nvmf_discovery_add_referral", 00:05:43.866 "nvmf_subsystem_remove_listener", 00:05:43.866 "nvmf_subsystem_add_listener", 00:05:43.866 "nvmf_delete_subsystem", 00:05:43.866 "nvmf_create_subsystem", 00:05:43.866 "nvmf_get_subsystems", 00:05:43.866 "env_dpdk_get_mem_stats", 00:05:43.866 "nbd_get_disks", 00:05:43.866 "nbd_stop_disk", 00:05:43.866 "nbd_start_disk", 00:05:43.866 "ublk_recover_disk", 00:05:43.866 "ublk_get_disks", 00:05:43.866 "ublk_stop_disk", 00:05:43.866 "ublk_start_disk", 00:05:43.866 "ublk_destroy_target", 00:05:43.866 "ublk_create_target", 00:05:43.866 "virtio_blk_create_transport", 00:05:43.866 "virtio_blk_get_transports", 00:05:43.866 "vhost_controller_set_coalescing", 00:05:43.866 "vhost_get_controllers", 00:05:43.866 "vhost_delete_controller", 00:05:43.866 "vhost_create_blk_controller", 00:05:43.866 "vhost_scsi_controller_remove_target", 00:05:43.866 "vhost_scsi_controller_add_target", 00:05:43.866 "vhost_start_scsi_controller", 00:05:43.866 "vhost_create_scsi_controller", 00:05:43.866 "thread_set_cpumask", 00:05:43.866 "framework_get_scheduler", 00:05:43.866 "framework_set_scheduler", 00:05:43.866 "framework_get_reactors", 00:05:43.866 "thread_get_io_channels", 00:05:43.866 "thread_get_pollers", 00:05:43.866 "thread_get_stats", 00:05:43.866 "framework_monitor_context_switch", 00:05:43.866 "spdk_kill_instance", 00:05:43.866 "log_enable_timestamps", 00:05:43.866 "log_get_flags", 00:05:43.866 "log_clear_flag", 00:05:43.866 "log_set_flag", 00:05:43.867 "log_get_level", 00:05:43.867 "log_set_level", 00:05:43.867 "log_get_print_level", 00:05:43.867 "log_set_print_level", 00:05:43.867 "framework_enable_cpumask_locks", 00:05:43.867 "framework_disable_cpumask_locks", 00:05:43.867 "framework_wait_init", 00:05:43.867 "framework_start_init", 00:05:43.867 "scsi_get_devices", 00:05:43.867 "bdev_get_histogram", 00:05:43.867 "bdev_enable_histogram", 00:05:43.867 "bdev_set_qos_limit", 00:05:43.867 "bdev_set_qd_sampling_period", 00:05:43.867 "bdev_get_bdevs", 00:05:43.867 "bdev_reset_iostat", 00:05:43.867 "bdev_get_iostat", 00:05:43.867 "bdev_examine", 00:05:43.867 "bdev_wait_for_examine", 00:05:43.867 "bdev_set_options", 00:05:43.867 "notify_get_notifications", 00:05:43.867 "notify_get_types", 00:05:43.867 "accel_get_stats", 00:05:43.867 "accel_set_options", 00:05:43.867 "accel_set_driver", 00:05:43.867 "accel_crypto_key_destroy", 00:05:43.867 "accel_crypto_keys_get", 00:05:43.867 "accel_crypto_key_create", 00:05:43.867 "accel_assign_opc", 00:05:43.867 "accel_get_module_info", 00:05:43.867 "accel_get_opc_assignments", 00:05:43.867 "vmd_rescan", 00:05:43.867 "vmd_remove_device", 00:05:43.867 "vmd_enable", 00:05:43.867 "sock_get_default_impl", 00:05:43.867 "sock_set_default_impl", 00:05:43.867 "sock_impl_set_options", 00:05:43.867 "sock_impl_get_options", 00:05:43.867 "iobuf_get_stats", 00:05:43.867 "iobuf_set_options", 00:05:43.867 "framework_get_pci_devices", 00:05:43.867 "framework_get_config", 00:05:43.867 
"framework_get_subsystems", 00:05:43.867 "trace_get_info", 00:05:43.867 "trace_get_tpoint_group_mask", 00:05:43.867 "trace_disable_tpoint_group", 00:05:43.867 "trace_enable_tpoint_group", 00:05:43.867 "trace_clear_tpoint_mask", 00:05:43.867 "trace_set_tpoint_mask", 00:05:43.867 "keyring_get_keys", 00:05:43.867 "spdk_get_version", 00:05:43.867 "rpc_get_methods" 00:05:43.867 ] 00:05:43.867 21:45:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.867 21:45:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.867 21:45:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.125 21:45:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.125 21:45:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71816 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 71816 ']' 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 71816 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71816 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71816' 00:05:44.125 killing process with pid 71816 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 71816 00:05:44.125 21:45:49 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 71816 00:05:44.383 00:05:44.383 real 0m1.855s 00:05:44.383 user 0m3.527s 00:05:44.383 sys 0m0.460s 00:05:44.383 ************************************ 00:05:44.383 END TEST spdkcli_tcp 00:05:44.384 ************************************ 00:05:44.384 21:45:49 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.384 21:45:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.384 21:45:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.384 21:45:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.384 21:45:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.384 21:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.384 ************************************ 00:05:44.384 START TEST dpdk_mem_utility 00:05:44.384 ************************************ 00:05:44.384 21:45:50 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.642 * Looking for test storage... 00:05:44.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:44.642 21:45:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.642 21:45:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71902 00:05:44.642 21:45:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71902 00:05:44.642 21:45:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.642 21:45:50 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 71902 ']' 00:05:44.642 21:45:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.642 21:45:50 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.642 21:45:50 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.642 21:45:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.642 21:45:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.642 [2024-07-24 21:45:50.181208] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:44.642 [2024-07-24 21:45:50.181311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71902 ] 00:05:44.642 [2024-07-24 21:45:50.319583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.901 [2024-07-24 21:45:50.408421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.901 [2024-07-24 21:45:50.465584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.468 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.468 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:45.468 21:45:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:45.468 21:45:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:45.468 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.468 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.468 { 00:05:45.468 "filename": "/tmp/spdk_mem_dump.txt" 00:05:45.468 } 00:05:45.468 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.468 21:45:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:45.728 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:45.728 1 heaps totaling size 814.000000 MiB 00:05:45.728 size: 814.000000 MiB heap id: 0 00:05:45.728 end heaps---------- 00:05:45.728 8 mempools totaling size 598.116089 MiB 00:05:45.728 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:45.728 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:45.728 size: 84.521057 MiB name: bdev_io_71902 00:05:45.728 size: 51.011292 MiB name: evtpool_71902 00:05:45.728 size: 50.003479 MiB name: msgpool_71902 00:05:45.728 size: 21.763794 MiB name: PDU_Pool 00:05:45.728 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:45.728 size: 0.026123 MiB name: Session_Pool 00:05:45.728 end mempools------- 00:05:45.728 6 memzones totaling size 4.142822 MiB 00:05:45.728 size: 1.000366 MiB name: RG_ring_0_71902 00:05:45.728 size: 1.000366 MiB name: RG_ring_1_71902 
00:05:45.728 size: 1.000366 MiB name: RG_ring_4_71902 00:05:45.728 size: 1.000366 MiB name: RG_ring_5_71902 00:05:45.728 size: 0.125366 MiB name: RG_ring_2_71902 00:05:45.728 size: 0.015991 MiB name: RG_ring_3_71902 00:05:45.728 end memzones------- 00:05:45.728 21:45:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:45.728 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:05:45.728 list of free elements. size: 12.472290 MiB 00:05:45.728 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:45.728 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:45.728 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:45.728 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:45.728 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:45.728 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:45.728 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:45.728 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:45.728 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:45.728 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:05:45.728 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:45.728 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:45.728 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:45.728 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:45.728 element at address: 0x200003a00000 with size: 0.348572 MiB 00:05:45.728 list of standard malloc elements. size: 199.265137 MiB 00:05:45.728 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:45.728 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:45.728 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:45.728 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:45.728 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:45.728 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:45.728 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:45.728 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:45.728 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:45.728 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:45.728 element at address: 
0x2000002d60c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a593c0 with size: 
0.000183 MiB 00:05:45.728 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:45.728 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:45.729 
element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:45.729 element at address: 
0x20001aa934c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:45.729 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c6c0 with size: 
0.000183 MiB 00:05:45.729 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:45.729 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:45.730 
element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:45.730 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:45.730 list of memzone associated elements. 
size: 602.262573 MiB 00:05:45.730 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:45.730 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:45.730 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:45.730 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:45.730 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:45.730 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_71902_0 00:05:45.730 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:45.730 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71902_0 00:05:45.730 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:45.730 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71902_0 00:05:45.730 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:45.730 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:45.730 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:45.730 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:45.730 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:45.730 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71902 00:05:45.730 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:45.730 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71902 00:05:45.730 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:45.730 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71902 00:05:45.730 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:45.730 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:45.730 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:45.730 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:45.730 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:45.730 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:45.730 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:45.730 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:45.730 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:45.730 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71902 00:05:45.730 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:45.730 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71902 00:05:45.730 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:45.730 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71902 00:05:45.730 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:45.730 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71902 00:05:45.730 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:45.730 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71902 00:05:45.730 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:45.730 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:45.730 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:45.730 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:45.730 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:45.730 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:45.730 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:45.730 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_71902 00:05:45.730 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:45.730 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:45.730 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:45.730 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:45.730 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:45.730 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71902 00:05:45.730 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:45.730 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:45.730 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:45.730 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71902 00:05:45.730 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:45.730 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71902 00:05:45.730 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:45.730 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:45.730 21:45:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:45.730 21:45:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71902 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 71902 ']' 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 71902 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71902 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71902' 00:05:45.730 killing process with pid 71902 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 71902 00:05:45.730 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 71902 00:05:45.988 00:05:45.988 real 0m1.631s 00:05:45.988 user 0m1.745s 00:05:45.988 sys 0m0.424s 00:05:45.988 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.988 21:45:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.988 ************************************ 00:05:45.988 END TEST dpdk_mem_utility 00:05:45.988 ************************************ 00:05:45.988 21:45:51 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:45.988 21:45:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.988 21:45:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.988 21:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:46.249 ************************************ 00:05:46.249 START TEST event 00:05:46.249 ************************************ 00:05:46.249 21:45:51 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:46.249 * Looking for test storage... 
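The dpdk_mem_utility run that just ended asks the target to dump its DPDK memory state and then post-processes the dump: env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py turns it into the heap/mempool/memzone summary and the per-heap element listing shown above. A sketch of that flow, assuming rpc_cmd resolves to plain rpc.py as it does in this run:

    # Dump DPDK memory statistics from the running target, then summarise them.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    "$RPC" env_dpdk_get_mem_stats      # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    "$MEM_SCRIPT"                      # overall heap / mempool / memzone totals
    "$MEM_SCRIPT" -m 0                 # the detailed per-heap element lists, as run above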
00:05:46.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:46.249 21:45:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:46.249 21:45:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.249 21:45:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.249 21:45:51 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:46.249 21:45:51 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.249 21:45:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.249 ************************************ 00:05:46.249 START TEST event_perf 00:05:46.249 ************************************ 00:05:46.249 21:45:51 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.249 Running I/O for 1 seconds...[2024-07-24 21:45:51.825331] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:46.250 [2024-07-24 21:45:51.825438] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71979 ] 00:05:46.250 [2024-07-24 21:45:51.958940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.532 [2024-07-24 21:45:52.055170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.532 [2024-07-24 21:45:52.055246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.532 Running I/O for 1 seconds...[2024-07-24 21:45:52.056012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.532 [2024-07-24 21:45:52.056056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.466 00:05:47.466 lcore 0: 191203 00:05:47.466 lcore 1: 191203 00:05:47.466 lcore 2: 191204 00:05:47.466 lcore 3: 191205 00:05:47.466 done. 00:05:47.466 00:05:47.466 real 0m1.323s 00:05:47.466 user 0m4.144s 00:05:47.466 sys 0m0.058s 00:05:47.466 21:45:53 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.466 ************************************ 00:05:47.466 END TEST event_perf 00:05:47.466 ************************************ 00:05:47.466 21:45:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.466 21:45:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.466 21:45:53 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:47.466 21:45:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.466 21:45:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.466 ************************************ 00:05:47.466 START TEST event_reactor 00:05:47.466 ************************************ 00:05:47.466 21:45:53 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.724 [2024-07-24 21:45:53.196253] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:47.724 [2024-07-24 21:45:53.196343] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72012 ] 00:05:47.724 [2024-07-24 21:45:53.327698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.724 [2024-07-24 21:45:53.414974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.099 test_start 00:05:49.099 oneshot 00:05:49.099 tick 100 00:05:49.099 tick 100 00:05:49.099 tick 250 00:05:49.099 tick 100 00:05:49.099 tick 100 00:05:49.099 tick 250 00:05:49.099 tick 500 00:05:49.099 tick 100 00:05:49.099 tick 100 00:05:49.099 tick 100 00:05:49.099 tick 250 00:05:49.099 tick 100 00:05:49.099 tick 100 00:05:49.099 test_end 00:05:49.099 00:05:49.099 real 0m1.298s 00:05:49.099 user 0m1.142s 00:05:49.099 sys 0m0.049s 00:05:49.099 21:45:54 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.099 21:45:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.099 ************************************ 00:05:49.099 END TEST event_reactor 00:05:49.099 ************************************ 00:05:49.099 21:45:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.099 21:45:54 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:49.099 21:45:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.099 21:45:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.099 ************************************ 00:05:49.099 START TEST event_reactor_perf 00:05:49.099 ************************************ 00:05:49.099 21:45:54 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.099 [2024-07-24 21:45:54.544665] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:49.099 [2024-07-24 21:45:54.544780] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72048 ] 00:05:49.099 [2024-07-24 21:45:54.687260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.099 [2024-07-24 21:45:54.765893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.475 test_start 00:05:50.475 test_end 00:05:50.475 Performance: 375032 events per second 00:05:50.475 00:05:50.475 real 0m1.309s 00:05:50.475 user 0m1.143s 00:05:50.475 sys 0m0.059s 00:05:50.475 21:45:55 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.475 ************************************ 00:05:50.475 END TEST event_reactor_perf 00:05:50.475 ************************************ 00:05:50.475 21:45:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.475 21:45:55 event -- event/event.sh@49 -- # uname -s 00:05:50.475 21:45:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.475 21:45:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.475 21:45:55 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.475 21:45:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.475 21:45:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.475 ************************************ 00:05:50.475 START TEST event_scheduler 00:05:50.475 ************************************ 00:05:50.475 21:45:55 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.475 * Looking for test storage... 00:05:50.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:50.475 21:45:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.475 21:45:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72109 00:05:50.475 21:45:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.475 21:45:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.475 21:45:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72109 00:05:50.475 21:45:55 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 72109 ']' 00:05:50.475 21:45:55 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.475 21:45:55 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.475 21:45:55 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.475 21:45:55 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.475 21:45:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.475 [2024-07-24 21:45:56.019778] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:50.475 [2024-07-24 21:45:56.019890] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72109 ] 00:05:50.475 [2024-07-24 21:45:56.163595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.764 [2024-07-24 21:45:56.258107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.764 [2024-07-24 21:45:56.258268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.764 [2024-07-24 21:45:56.258411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.764 [2024-07-24 21:45:56.258416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.330 21:45:56 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.330 21:45:56 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:51.330 21:45:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.330 21:45:56 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.330 21:45:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.330 POWER: Env isn't set yet! 00:05:51.330 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:51.330 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.330 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.330 POWER: Attempting to initialise PSTAT power management... 00:05:51.330 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.330 POWER: Cannot set governor of lcore 0 to performance 00:05:51.330 POWER: Attempting to initialise CPPC power management... 00:05:51.330 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.330 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.330 POWER: Attempting to initialise VM power management... 
00:05:51.330 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:51.330 POWER: Unable to set Power Management Environment for lcore 0 00:05:51.330 [2024-07-24 21:45:56.972200] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:51.330 [2024-07-24 21:45:56.972214] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:51.330 [2024-07-24 21:45:56.972223] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:51.330 [2024-07-24 21:45:56.972242] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:51.330 [2024-07-24 21:45:56.972249] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:51.330 [2024-07-24 21:45:56.972257] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:51.330 21:45:56 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.330 21:45:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:51.330 21:45:56 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.330 21:45:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.330 [2024-07-24 21:45:57.033857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.589 [2024-07-24 21:45:57.064833] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:51.589 21:45:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:51.589 21:45:57 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.589 21:45:57 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 ************************************ 00:05:51.589 START TEST scheduler_create_thread 00:05:51.589 ************************************ 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 2 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 3 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 4 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 5 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 6 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 7 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 8 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 9 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:51.589 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.590 10 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.590 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.156 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.156 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.156 21:45:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.156 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.156 21:45:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.531 ************************************ 00:05:53.531 END TEST scheduler_create_thread 00:05:53.531 ************************************ 00:05:53.531 21:45:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.531 00:05:53.531 real 0m1.752s 00:05:53.531 user 0m0.015s 00:05:53.531 sys 0m0.007s 00:05:53.531 21:45:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.531 21:45:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.531 21:45:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.531 21:45:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72109 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 72109 ']' 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 72109 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72109 00:05:53.531 killing process with pid 72109 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72109' 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 72109 00:05:53.531 21:45:58 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 72109 00:05:53.789 [2024-07-24 21:45:59.307726] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:54.048 00:05:54.048 real 0m3.630s 00:05:54.048 user 0m6.503s 00:05:54.048 sys 0m0.368s 00:05:54.048 ************************************ 00:05:54.048 END TEST event_scheduler 00:05:54.048 ************************************ 00:05:54.048 21:45:59 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.048 21:45:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 21:45:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.048 21:45:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.048 21:45:59 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.048 21:45:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.048 21:45:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 ************************************ 00:05:54.048 START TEST app_repeat 00:05:54.048 ************************************ 00:05:54.048 21:45:59 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.048 Process app_repeat pid: 72198 00:05:54.048 spdk_app_start Round 0 00:05:54.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72198 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72198' 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.048 21:45:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72198 /var/tmp/spdk-nbd.sock 00:05:54.048 21:45:59 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 72198 ']' 00:05:54.048 21:45:59 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.048 21:45:59 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.048 21:45:59 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.048 21:45:59 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.048 21:45:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.048 [2024-07-24 21:45:59.608260] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:54.048 [2024-07-24 21:45:59.608391] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72198 ] 00:05:54.048 [2024-07-24 21:45:59.747761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.306 [2024-07-24 21:45:59.842792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.306 [2024-07-24 21:45:59.842803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.306 [2024-07-24 21:45:59.902490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.242 21:46:00 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.242 21:46:00 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:55.242 21:46:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.242 Malloc0 00:05:55.242 21:46:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.500 Malloc1 00:05:55.500 21:46:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 
'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.500 21:46:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.759 /dev/nbd0 00:05:56.017 21:46:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.017 21:46:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.017 1+0 records in 00:05:56.017 1+0 records out 00:05:56.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293462 s, 14.0 MB/s 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.017 21:46:01 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:56.017 21:46:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.017 21:46:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.017 21:46:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.276 /dev/nbd1 00:05:56.276 21:46:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.276 21:46:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.276 21:46:01 event.app_repeat -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.276 1+0 records in 00:05:56.276 1+0 records out 00:05:56.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368713 s, 11.1 MB/s 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.276 21:46:01 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:56.276 21:46:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.276 21:46:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.276 21:46:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.276 21:46:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.276 21:46:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.535 { 00:05:56.535 "nbd_device": "/dev/nbd0", 00:05:56.535 "bdev_name": "Malloc0" 00:05:56.535 }, 00:05:56.535 { 00:05:56.535 "nbd_device": "/dev/nbd1", 00:05:56.535 "bdev_name": "Malloc1" 00:05:56.535 } 00:05:56.535 ]' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.535 { 00:05:56.535 "nbd_device": "/dev/nbd0", 00:05:56.535 "bdev_name": "Malloc0" 00:05:56.535 }, 00:05:56.535 { 00:05:56.535 "nbd_device": "/dev/nbd1", 00:05:56.535 "bdev_name": "Malloc1" 00:05:56.535 } 00:05:56.535 ]' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.535 /dev/nbd1' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.535 /dev/nbd1' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.535 
21:46:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.535 256+0 records in 00:05:56.535 256+0 records out 00:05:56.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00813797 s, 129 MB/s 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.535 256+0 records in 00:05:56.535 256+0 records out 00:05:56.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0366176 s, 28.6 MB/s 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.535 256+0 records in 00:05:56.535 256+0 records out 00:05:56.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233778 s, 44.9 MB/s 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.535 21:46:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd0 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.103 21:46:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.361 21:46:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.620 21:46:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.620 21:46:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.879 21:46:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.139 [2024-07-24 21:46:03.727149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.139 [2024-07-24 21:46:03.814010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.139 [2024-07-24 21:46:03.814023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.397 [2024-07-24 21:46:03.873834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.397 [2024-07-24 21:46:03.873942] 
notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.397 [2024-07-24 21:46:03.873955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.928 spdk_app_start Round 1 00:06:00.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.928 21:46:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.928 21:46:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.928 21:46:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72198 /var/tmp/spdk-nbd.sock 00:06:00.928 21:46:06 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 72198 ']' 00:06:00.928 21:46:06 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.928 21:46:06 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.928 21:46:06 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.928 21:46:06 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.928 21:46:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.187 21:46:06 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.187 21:46:06 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:01.187 21:46:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.446 Malloc0 00:06:01.446 21:46:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.705 Malloc1 00:06:01.705 21:46:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.705 21:46:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 
00:06:01.964 /dev/nbd0 00:06:01.964 21:46:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.964 21:46:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.964 1+0 records in 00:06:01.964 1+0 records out 00:06:01.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016406 s, 25.0 MB/s 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.964 21:46:07 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.964 21:46:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.964 21:46:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.965 21:46:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.224 /dev/nbd1 00:06:02.224 21:46:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.224 21:46:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.224 1+0 records in 00:06:02.224 1+0 records out 00:06:02.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301611 s, 13.6 MB/s 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@882 -- # 
size=4096 00:06:02.224 21:46:07 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.483 21:46:07 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:02.483 21:46:07 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:02.483 21:46:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.483 21:46:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.483 21:46:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.483 21:46:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.483 21:46:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.742 { 00:06:02.742 "nbd_device": "/dev/nbd0", 00:06:02.742 "bdev_name": "Malloc0" 00:06:02.742 }, 00:06:02.742 { 00:06:02.742 "nbd_device": "/dev/nbd1", 00:06:02.742 "bdev_name": "Malloc1" 00:06:02.742 } 00:06:02.742 ]' 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.742 { 00:06:02.742 "nbd_device": "/dev/nbd0", 00:06:02.742 "bdev_name": "Malloc0" 00:06:02.742 }, 00:06:02.742 { 00:06:02.742 "nbd_device": "/dev/nbd1", 00:06:02.742 "bdev_name": "Malloc1" 00:06:02.742 } 00:06:02.742 ]' 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.742 /dev/nbd1' 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.742 /dev/nbd1' 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.742 21:46:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.743 256+0 records in 00:06:02.743 256+0 records out 00:06:02.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00760149 s, 138 MB/s 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.743 256+0 records in 00:06:02.743 256+0 records out 00:06:02.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229182 s, 45.8 MB/s 00:06:02.743 21:46:08 
event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.743 256+0 records in 00:06:02.743 256+0 records out 00:06:02.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288611 s, 36.3 MB/s 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.743 21:46:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.001 21:46:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.260 21:46:08 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.260 21:46:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.518 21:46:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.518 21:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.518 21:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.776 21:46:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.776 21:46:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.034 21:46:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.293 [2024-07-24 21:46:09.790803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.293 [2024-07-24 21:46:09.874670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.293 [2024-07-24 21:46:09.874680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.293 [2024-07-24 21:46:09.932902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.293 [2024-07-24 21:46:09.933061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.293 [2024-07-24 21:46:09.933075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.576 spdk_app_start Round 2 00:06:07.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:07.576 21:46:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.576 21:46:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:07.576 21:46:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72198 /var/tmp/spdk-nbd.sock 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 72198 ']' 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.576 21:46:12 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:07.576 21:46:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.576 Malloc0 00:06:07.576 21:46:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.834 Malloc1 00:06:07.834 21:46:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.834 21:46:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.092 /dev/nbd0 00:06:08.092 21:46:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.092 21:46:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:08.092 
21:46:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.092 1+0 records in 00:06:08.092 1+0 records out 00:06:08.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001902 s, 21.5 MB/s 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:08.092 21:46:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:08.092 21:46:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.092 21:46:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.092 21:46:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.350 /dev/nbd1 00:06:08.350 21:46:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.350 21:46:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.350 1+0 records in 00:06:08.350 1+0 records out 00:06:08.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207363 s, 19.8 MB/s 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:08.350 21:46:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:08.350 21:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:06:08.350 21:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.350 21:46:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.350 21:46:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.350 21:46:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.608 21:46:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.608 { 00:06:08.608 "nbd_device": "/dev/nbd0", 00:06:08.608 "bdev_name": "Malloc0" 00:06:08.608 }, 00:06:08.608 { 00:06:08.608 "nbd_device": "/dev/nbd1", 00:06:08.608 "bdev_name": "Malloc1" 00:06:08.608 } 00:06:08.608 ]' 00:06:08.608 21:46:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.608 { 00:06:08.608 "nbd_device": "/dev/nbd0", 00:06:08.608 "bdev_name": "Malloc0" 00:06:08.608 }, 00:06:08.608 { 00:06:08.608 "nbd_device": "/dev/nbd1", 00:06:08.608 "bdev_name": "Malloc1" 00:06:08.608 } 00:06:08.608 ]' 00:06:08.608 21:46:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.865 21:46:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.865 /dev/nbd1' 00:06:08.865 21:46:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.865 /dev/nbd1' 00:06:08.865 21:46:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.865 21:46:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.866 256+0 records in 00:06:08.866 256+0 records out 00:06:08.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00837563 s, 125 MB/s 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.866 256+0 records in 00:06:08.866 256+0 records out 00:06:08.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227637 s, 46.1 MB/s 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.866 256+0 records in 00:06:08.866 256+0 records out 00:06:08.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309971 s, 33.8 MB/s 00:06:08.866 21:46:14 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.866 21:46:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.123 21:46:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.381 
21:46:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.381 21:46:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.638 21:46:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.638 21:46:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.638 21:46:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.896 21:46:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.896 21:46:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.154 21:46:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.154 [2024-07-24 21:46:15.854890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.412 [2024-07-24 21:46:15.928718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.412 [2024-07-24 21:46:15.928731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.412 [2024-07-24 21:46:15.986866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.412 [2024-07-24 21:46:15.986963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.412 [2024-07-24 21:46:15.986976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.995 21:46:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72198 /var/tmp/spdk-nbd.sock 00:06:12.995 21:46:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 72198 ']' 00:06:12.995 21:46:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.995 21:46:18 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.995 21:46:18 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
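Each app_repeat round's nbd check above boils down to writing a random pattern through every exported /dev/nbd device and comparing it back. Condensed from the dd and cmp calls in the trace (paths and the temp-file name are taken verbatim from the log; the real helper does the write pass for all devices before the verify pass, and error handling is omitted):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    # 1 MiB of random data, pushed through each nbd device with O_DIRECT,
    # then read back and compared byte by byte.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"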
00:06:12.995 21:46:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.995 21:46:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.252 21:46:18 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.252 21:46:18 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:13.252 21:46:18 event.app_repeat -- event/event.sh@39 -- # killprocess 72198 00:06:13.252 21:46:18 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 72198 ']' 00:06:13.252 21:46:18 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 72198 00:06:13.253 21:46:18 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:13.253 21:46:18 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.253 21:46:18 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72198 00:06:13.511 killing process with pid 72198 00:06:13.511 21:46:18 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.511 21:46:18 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.511 21:46:18 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72198' 00:06:13.511 21:46:18 event.app_repeat -- common/autotest_common.sh@965 -- # kill 72198 00:06:13.511 21:46:18 event.app_repeat -- common/autotest_common.sh@970 -- # wait 72198 00:06:13.511 spdk_app_start is called in Round 0. 00:06:13.511 Shutdown signal received, stop current app iteration 00:06:13.511 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:13.511 spdk_app_start is called in Round 1. 00:06:13.511 Shutdown signal received, stop current app iteration 00:06:13.511 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:13.511 spdk_app_start is called in Round 2. 00:06:13.511 Shutdown signal received, stop current app iteration 00:06:13.511 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:13.511 spdk_app_start is called in Round 3. 00:06:13.511 Shutdown signal received, stop current app iteration 00:06:13.511 21:46:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.511 21:46:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:13.511 00:06:13.511 real 0m19.603s 00:06:13.511 user 0m44.213s 00:06:13.511 sys 0m3.074s 00:06:13.511 21:46:19 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.511 21:46:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.511 ************************************ 00:06:13.511 END TEST app_repeat 00:06:13.511 ************************************ 00:06:13.511 21:46:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.511 21:46:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:13.511 21:46:19 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.511 21:46:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.511 21:46:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.511 ************************************ 00:06:13.511 START TEST cpu_locks 00:06:13.511 ************************************ 00:06:13.769 21:46:19 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:13.769 * Looking for test storage... 
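The killprocess calls in this log all follow the same shape. A rough sketch of what the trace shows (the sudo special case visible in the trace is elided; the signal choice is the default SIGTERM):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # fail fast if it already exited
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"                                       # SIGTERM by default
        wait "$pid"                                       # reap it before the next test starts
    }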
00:06:13.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:13.769 21:46:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.769 21:46:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.769 21:46:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.769 21:46:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.769 21:46:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.769 21:46:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.769 21:46:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.769 ************************************ 00:06:13.769 START TEST default_locks 00:06:13.769 ************************************ 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72636 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72636 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 72636 ']' 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.769 21:46:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.769 [2024-07-24 21:46:19.377965] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:13.769 [2024-07-24 21:46:19.378071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72636 ] 00:06:14.027 [2024-07-24 21:46:19.513750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.027 [2024-07-24 21:46:19.616195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.027 [2024-07-24 21:46:19.675250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.960 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.960 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:14.960 21:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72636 00:06:14.960 21:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72636 00:06:14.960 21:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72636 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 72636 ']' 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 72636 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72636 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:15.219 killing process with pid 72636 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72636' 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 72636 00:06:15.219 21:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 72636 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72636 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72636 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 72636 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 72636 ']' 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.477 21:46:21 
event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.477 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (72636) - No such process 00:06:15.477 ERROR: process (pid: 72636) is no longer running 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.477 00:06:15.477 real 0m1.817s 00:06:15.477 user 0m1.932s 00:06:15.477 sys 0m0.543s 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.477 ************************************ 00:06:15.477 END TEST default_locks 00:06:15.477 ************************************ 00:06:15.477 21:46:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.477 21:46:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.477 21:46:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.477 21:46:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.477 21:46:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.477 ************************************ 00:06:15.477 START TEST default_locks_via_rpc 00:06:15.477 ************************************ 00:06:15.477 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:15.477 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72690 00:06:15.477 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.477 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72690 00:06:15.735 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 72690 ']' 00:06:15.735 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.735 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:06:15.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.735 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.735 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.735 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.735 [2024-07-24 21:46:21.248913] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:15.736 [2024-07-24 21:46:21.249042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72690 ] 00:06:15.736 [2024-07-24 21:46:21.383343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.993 [2024-07-24 21:46:21.477884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.993 [2024-07-24 21:46:21.533241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72690 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72690 00:06:16.251 21:46:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.509 21:46:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72690 00:06:16.509 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 72690 ']' 00:06:16.509 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 72690 00:06:16.509 21:46:22 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@951 -- # uname 00:06:16.509 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:16.509 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72690 00:06:16.769 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:16.769 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:16.769 killing process with pid 72690 00:06:16.769 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72690' 00:06:16.769 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 72690 00:06:16.769 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 72690 00:06:17.028 00:06:17.028 real 0m1.421s 00:06:17.028 user 0m1.382s 00:06:17.028 sys 0m0.569s 00:06:17.028 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.028 21:46:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.028 ************************************ 00:06:17.028 END TEST default_locks_via_rpc 00:06:17.028 ************************************ 00:06:17.028 21:46:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.028 21:46:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.028 21:46:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.028 21:46:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.028 ************************************ 00:06:17.028 START TEST non_locking_app_on_locked_coremask 00:06:17.028 ************************************ 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72729 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72729 /var/tmp/spdk.sock 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 72729 ']' 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
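default_locks_via_rpc above runs the same lock check twice, before and after toggling the feature over RPC. The core of it, pieced together from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py shown in the log; backgrounding the target and the waitforlisten plumbing are simplified here):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &
    pid=$!
    # ... waitforlisten "$pid" ...
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # core-0 lock is held by default
    rpc_cmd framework_disable_cpumask_locks      # drop the lock at runtime
    rpc_cmd framework_enable_cpumask_locks       # take it again
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # and it shows up once more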
00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.028 21:46:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.028 [2024-07-24 21:46:22.735644] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:17.028 [2024-07-24 21:46:22.735754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72729 ] 00:06:17.294 [2024-07-24 21:46:22.871588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.295 [2024-07-24 21:46:22.969729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.566 [2024-07-24 21:46:23.026520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.134 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72745 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72745 /var/tmp/spdk2.sock 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 72745 ']' 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.135 21:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.135 [2024-07-24 21:46:23.816416] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:18.135 [2024-07-24 21:46:23.816556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72745 ] 00:06:18.393 [2024-07-24 21:46:23.963089] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
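non_locking_app_on_locked_coremask above demonstrates that a second target can share core 0 as long as it opts out of the lock. In outline (binary path and flags as in the trace; backgrounding and waitforlisten details simplified):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                                                  # claims the core-0 lock
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # opts out of the lock
    # The second target logs "CPU core locks deactivated." and comes up on
    # the same core without conflicting with the first one.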
00:06:18.393 [2024-07-24 21:46:23.963137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.651 [2024-07-24 21:46:24.147080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.651 [2024-07-24 21:46:24.267471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.241 21:46:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.241 21:46:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:19.241 21:46:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72729 00:06:19.241 21:46:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72729 00:06:19.241 21:46:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72729 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 72729 ']' 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 72729 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72729 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.176 killing process with pid 72729 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72729' 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 72729 00:06:20.176 21:46:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 72729 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72745 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 72745 ']' 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 72745 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72745 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.743 killing process with pid 72745 00:06:20.743 21:46:26 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72745' 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 72745 00:06:20.743 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 72745 00:06:21.309 00:06:21.309 real 0m4.162s 00:06:21.309 user 0m4.666s 00:06:21.309 sys 0m1.110s 00:06:21.309 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.309 21:46:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.309 ************************************ 00:06:21.309 END TEST non_locking_app_on_locked_coremask 00:06:21.309 ************************************ 00:06:21.309 21:46:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.309 21:46:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.309 21:46:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.309 21:46:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.309 ************************************ 00:06:21.309 START TEST locking_app_on_unlocked_coremask 00:06:21.309 ************************************ 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72812 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72812 /var/tmp/spdk.sock 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 72812 ']' 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.309 21:46:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.309 [2024-07-24 21:46:26.951969] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:21.309 [2024-07-24 21:46:26.952111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72812 ] 00:06:21.569 [2024-07-24 21:46:27.091589] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.569 [2024-07-24 21:46:27.091670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.569 [2024-07-24 21:46:27.191912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.569 [2024-07-24 21:46:27.251099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72828 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72828 /var/tmp/spdk2.sock 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 72828 ']' 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.504 21:46:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.504 [2024-07-24 21:46:27.973270] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
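locking_app_on_unlocked_coremask inverts the previous case: the first target skips the lock, so a second, normally locked target is still free to claim core 0. Roughly, with the same simplifications as above:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &     # first target takes no core lock
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &      # second target locks core 0 itself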
00:06:22.504 [2024-07-24 21:46:27.973397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72828 ] 00:06:22.504 [2024-07-24 21:46:28.111974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.762 [2024-07-24 21:46:28.288008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.763 [2024-07-24 21:46:28.394947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.366 21:46:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.366 21:46:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:23.366 21:46:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72828 00:06:23.366 21:46:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72828 00:06:23.366 21:46:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72812 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 72812 ']' 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 72812 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72812 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.933 killing process with pid 72812 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72812' 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 72812 00:06:23.933 21:46:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 72812 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72828 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 72828 ']' 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 72828 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72828 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.868 killing process with pid 72828 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72828' 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 72828 00:06:24.868 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 72828 00:06:25.127 00:06:25.127 real 0m3.862s 00:06:25.127 user 0m4.266s 00:06:25.127 sys 0m1.037s 00:06:25.127 ************************************ 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.127 END TEST locking_app_on_unlocked_coremask 00:06:25.127 ************************************ 00:06:25.127 21:46:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:25.127 21:46:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.127 21:46:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.127 21:46:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.127 ************************************ 00:06:25.127 START TEST locking_app_on_locked_coremask 00:06:25.127 ************************************ 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72895 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72895 /var/tmp/spdk.sock 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 72895 ']' 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.127 21:46:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.385 [2024-07-24 21:46:30.845394] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:25.385 [2024-07-24 21:46:30.845508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72895 ] 00:06:25.385 [2024-07-24 21:46:30.977185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.385 [2024-07-24 21:46:31.061654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.644 [2024-07-24 21:46:31.121090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72911 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72911 /var/tmp/spdk2.sock 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72911 /var/tmp/spdk2.sock 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 72911 /var/tmp/spdk2.sock 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 72911 ']' 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.211 21:46:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.211 [2024-07-24 21:46:31.856587] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:26.211 [2024-07-24 21:46:31.856713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72911 ] 00:06:26.469 [2024-07-24 21:46:32.001757] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72895 has claimed it. 00:06:26.469 [2024-07-24 21:46:32.001853] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.035 ERROR: process (pid: 72911) is no longer running 00:06:27.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (72911) - No such process 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72895 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72895 00:06:27.035 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72895 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 72895 ']' 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 72895 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72895 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.294 killing process with pid 72895 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72895' 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 72895 00:06:27.294 21:46:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 72895 00:06:27.861 00:06:27.861 real 0m2.482s 00:06:27.861 user 0m2.834s 00:06:27.861 sys 0m0.607s 00:06:27.861 ************************************ 00:06:27.861 21:46:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.861 21:46:33 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.861 END TEST locking_app_on_locked_coremask 00:06:27.861 ************************************ 00:06:27.861 21:46:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.861 21:46:33 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.861 21:46:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.861 21:46:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.861 ************************************ 00:06:27.861 START TEST locking_overlapped_coremask 00:06:27.861 ************************************ 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72957 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72957 /var/tmp/spdk.sock 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 72957 ']' 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.861 21:46:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.861 [2024-07-24 21:46:33.401987] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:27.861 [2024-07-24 21:46:33.402153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72957 ] 00:06:27.861 [2024-07-24 21:46:33.543219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.119 [2024-07-24 21:46:33.626137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.119 [2024-07-24 21:46:33.626290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.119 [2024-07-24 21:46:33.626293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.119 [2024-07-24 21:46:33.684832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72975 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72975 /var/tmp/spdk2.sock 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72975 /var/tmp/spdk2.sock 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 72975 /var/tmp/spdk2.sock 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 72975 ']' 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.685 21:46:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.944 [2024-07-24 21:46:34.402707] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:28.944 [2024-07-24 21:46:34.402805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72975 ] 00:06:28.944 [2024-07-24 21:46:34.551287] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72957 has claimed it. 00:06:28.944 [2024-07-24 21:46:34.551384] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.528 ERROR: process (pid: 72975) is no longer running 00:06:29.528 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (72975) - No such process 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72957 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 72957 ']' 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 72957 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72957 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72957' 00:06:29.528 killing process with pid 72957 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 72957 00:06:29.528 21:46:35 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 72957 00:06:30.100 00:06:30.100 real 0m2.184s 00:06:30.100 user 0m6.070s 00:06:30.100 sys 0m0.462s 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.100 ************************************ 00:06:30.100 END TEST locking_overlapped_coremask 00:06:30.100 ************************************ 00:06:30.100 21:46:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.100 21:46:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.100 21:46:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.100 21:46:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.100 ************************************ 00:06:30.100 START TEST locking_overlapped_coremask_via_rpc 00:06:30.100 ************************************ 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73022 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73022 /var/tmp/spdk.sock 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 73022 ']' 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.100 21:46:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.100 [2024-07-24 21:46:35.618236] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:30.100 [2024-07-24 21:46:35.618324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73022 ] 00:06:30.100 [2024-07-24 21:46:35.752485] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.100 [2024-07-24 21:46:35.752534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.358 [2024-07-24 21:46:35.844606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.358 [2024-07-24 21:46:35.844742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.358 [2024-07-24 21:46:35.844747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.358 [2024-07-24 21:46:35.897276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73040 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73040 /var/tmp/spdk2.sock 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 73040 ']' 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.925 21:46:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.925 [2024-07-24 21:46:36.638842] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:30.925 [2024-07-24 21:46:36.638938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73040 ] 00:06:31.182 [2024-07-24 21:46:36.780161] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.182 [2024-07-24 21:46:36.780211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.440 [2024-07-24 21:46:36.956274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.440 [2024-07-24 21:46:36.956394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.440 [2024-07-24 21:46:36.956395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.440 [2024-07-24 21:46:37.061534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.007 [2024-07-24 21:46:37.668731] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73022 has claimed it. 
00:06:32.007 request: 00:06:32.007 { 00:06:32.007 "method": "framework_enable_cpumask_locks", 00:06:32.007 "req_id": 1 00:06:32.007 } 00:06:32.007 Got JSON-RPC error response 00:06:32.007 response: 00:06:32.007 { 00:06:32.007 "code": -32603, 00:06:32.007 "message": "Failed to claim CPU core: 2" 00:06:32.007 } 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73022 /var/tmp/spdk.sock 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 73022 ']' 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.007 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73040 /var/tmp/spdk2.sock 00:06:32.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 73040 ']' 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.266 21:46:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.525 ************************************ 00:06:32.525 END TEST locking_overlapped_coremask_via_rpc 00:06:32.525 ************************************ 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.525 00:06:32.525 real 0m2.596s 00:06:32.525 user 0m1.323s 00:06:32.525 sys 0m0.200s 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.525 21:46:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.525 21:46:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.525 21:46:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73022 ]] 00:06:32.525 21:46:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73022 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 73022 ']' 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 73022 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73022 00:06:32.525 killing process with pid 73022 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73022' 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 73022 00:06:32.525 21:46:38 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 73022 00:06:33.093 21:46:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73040 ]] 00:06:33.093 21:46:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73040 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 73040 ']' 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 73040 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.093 
21:46:38 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73040 00:06:33.093 killing process with pid 73040 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73040' 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 73040 00:06:33.093 21:46:38 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 73040 00:06:33.352 21:46:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.352 Process with pid 73022 is not found 00:06:33.352 Process with pid 73040 is not found 00:06:33.352 21:46:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.352 21:46:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73022 ]] 00:06:33.352 21:46:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73022 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 73022 ']' 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 73022 00:06:33.352 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (73022) - No such process 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 73022 is not found' 00:06:33.352 21:46:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73040 ]] 00:06:33.352 21:46:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73040 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 73040 ']' 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 73040 00:06:33.352 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (73040) - No such process 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 73040 is not found' 00:06:33.352 21:46:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.352 00:06:33.352 real 0m19.783s 00:06:33.352 user 0m34.723s 00:06:33.352 sys 0m5.344s 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.352 ************************************ 00:06:33.352 21:46:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.352 END TEST cpu_locks 00:06:33.352 ************************************ 00:06:33.352 00:06:33.352 real 0m47.337s 00:06:33.352 user 1m32.012s 00:06:33.352 sys 0m9.178s 00:06:33.352 21:46:39 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.352 ************************************ 00:06:33.352 21:46:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.352 END TEST event 00:06:33.352 ************************************ 00:06:33.611 21:46:39 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:33.611 21:46:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.611 21:46:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.611 21:46:39 -- common/autotest_common.sh@10 -- # set +x 00:06:33.611 ************************************ 00:06:33.611 START TEST thread 00:06:33.611 ************************************ 00:06:33.611 21:46:39 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:33.611 * Looking for test storage... 
00:06:33.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:33.611 21:46:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.611 21:46:39 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:33.611 21:46:39 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.611 21:46:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.611 ************************************ 00:06:33.611 START TEST thread_poller_perf 00:06:33.611 ************************************ 00:06:33.611 21:46:39 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.611 [2024-07-24 21:46:39.205948] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:33.611 [2024-07-24 21:46:39.206058] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73157 ] 00:06:33.869 [2024-07-24 21:46:39.343541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.869 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:33.869 [2024-07-24 21:46:39.421789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.804 ====================================== 00:06:34.804 busy:2209788833 (cyc) 00:06:34.804 total_run_count: 330000 00:06:34.804 tsc_hz: 2200000000 (cyc) 00:06:34.804 ====================================== 00:06:34.804 poller_cost: 6696 (cyc), 3043 (nsec) 00:06:34.804 00:06:34.804 real 0m1.304s 00:06:34.804 user 0m1.140s 00:06:34.804 sys 0m0.057s 00:06:34.804 21:46:40 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.804 21:46:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.804 ************************************ 00:06:34.804 END TEST thread_poller_perf 00:06:34.804 ************************************ 00:06:35.063 21:46:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.063 21:46:40 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:35.063 21:46:40 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.063 21:46:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.063 ************************************ 00:06:35.063 START TEST thread_poller_perf 00:06:35.063 ************************************ 00:06:35.063 21:46:40 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:35.063 [2024-07-24 21:46:40.561966] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:35.063 [2024-07-24 21:46:40.562069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73194 ] 00:06:35.063 [2024-07-24 21:46:40.697445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.322 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:35.322 [2024-07-24 21:46:40.781109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.257 ====================================== 00:06:36.257 busy:2202201640 (cyc) 00:06:36.257 total_run_count: 4402000 00:06:36.257 tsc_hz: 2200000000 (cyc) 00:06:36.257 ====================================== 00:06:36.257 poller_cost: 500 (cyc), 227 (nsec) 00:06:36.257 00:06:36.257 real 0m1.308s 00:06:36.257 user 0m1.146s 00:06:36.257 sys 0m0.055s 00:06:36.257 21:46:41 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.257 21:46:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.257 ************************************ 00:06:36.257 END TEST thread_poller_perf 00:06:36.257 ************************************ 00:06:36.258 21:46:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.258 00:06:36.258 real 0m2.800s 00:06:36.258 user 0m2.357s 00:06:36.258 sys 0m0.221s 00:06:36.258 21:46:41 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.258 21:46:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.258 ************************************ 00:06:36.258 END TEST thread 00:06:36.258 ************************************ 00:06:36.258 21:46:41 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:36.258 21:46:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.258 21:46:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.258 21:46:41 -- common/autotest_common.sh@10 -- # set +x 00:06:36.258 ************************************ 00:06:36.258 START TEST accel 00:06:36.258 ************************************ 00:06:36.258 21:46:41 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:36.531 * Looking for test storage... 00:06:36.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:36.531 21:46:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:36.531 21:46:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:36.531 21:46:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.531 21:46:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=73268 00:06:36.531 21:46:42 accel -- accel/accel.sh@63 -- # waitforlisten 73268 00:06:36.531 21:46:42 accel -- common/autotest_common.sh@827 -- # '[' -z 73268 ']' 00:06:36.531 21:46:42 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.531 21:46:42 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:36.531 21:46:42 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.531 21:46:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:36.531 21:46:42 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.531 21:46:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.531 21:46:42 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.531 21:46:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.531 21:46:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.531 21:46:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.531 21:46:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.531 21:46:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.531 21:46:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:36.531 21:46:42 accel -- accel/accel.sh@41 -- # jq -r . 00:06:36.531 [2024-07-24 21:46:42.088353] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:36.531 [2024-07-24 21:46:42.088474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73268 ] 00:06:36.531 [2024-07-24 21:46:42.224466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.817 [2024-07-24 21:46:42.315827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.817 [2024-07-24 21:46:42.374161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.383 21:46:43 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.383 21:46:43 accel -- common/autotest_common.sh@860 -- # return 0 00:06:37.383 21:46:43 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:37.383 21:46:43 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:37.383 21:46:43 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:37.383 21:46:43 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:37.383 21:46:43 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:37.383 21:46:43 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:37.383 21:46:43 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:37.383 21:46:43 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.383 21:46:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 
21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:37.643 21:46:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:37.643 21:46:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.643 21:46:43 accel -- accel/accel.sh@75 -- # killprocess 73268 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@946 -- # '[' -z 73268 ']' 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@950 -- # kill -0 73268 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@951 -- # uname 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73268 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.643 killing process with pid 73268 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73268' 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@965 -- # kill 73268 00:06:37.643 21:46:43 accel -- common/autotest_common.sh@970 -- # wait 73268 00:06:37.901 21:46:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:37.901 21:46:43 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:37.901 21:46:43 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:37.901 21:46:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.901 21:46:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.901 21:46:43 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:37.901 21:46:43 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:37.901 21:46:43 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.901 21:46:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:38.158 21:46:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:38.158 21:46:43 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:38.158 21:46:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.158 21:46:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.158 ************************************ 00:06:38.158 START TEST accel_missing_filename 00:06:38.158 ************************************ 00:06:38.158 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:38.159 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:38.159 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:38.159 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.159 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.159 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.159 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.159 21:46:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:38.159 21:46:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:38.159 [2024-07-24 21:46:43.676592] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:38.159 [2024-07-24 21:46:43.676722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73320 ] 00:06:38.159 [2024-07-24 21:46:43.811141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.416 [2024-07-24 21:46:43.901672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.416 [2024-07-24 21:46:43.959616] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.416 [2024-07-24 21:46:44.040062] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:38.416 A filename is required. 
00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.416 00:06:38.416 real 0m0.455s 00:06:38.416 user 0m0.283s 00:06:38.416 sys 0m0.120s 00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.416 21:46:44 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:38.416 ************************************ 00:06:38.416 END TEST accel_missing_filename 00:06:38.416 ************************************ 00:06:38.674 21:46:44 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.674 21:46:44 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:38.674 21:46:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.675 21:46:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.675 ************************************ 00:06:38.675 START TEST accel_compress_verify 00:06:38.675 ************************************ 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.675 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.675 21:46:44 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:38.675 21:46:44 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:06:38.675 [2024-07-24 21:46:44.181884] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:38.675 [2024-07-24 21:46:44.181974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73344 ] 00:06:38.675 [2024-07-24 21:46:44.318935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.933 [2024-07-24 21:46:44.401588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.933 [2024-07-24 21:46:44.457996] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.934 [2024-07-24 21:46:44.534127] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:38.934 00:06:38.934 Compression does not support the verify option, aborting. 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:38.934 ************************************ 00:06:38.934 END TEST accel_compress_verify 00:06:38.934 ************************************ 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.934 00:06:38.934 real 0m0.445s 00:06:38.934 user 0m0.265s 00:06:38.934 sys 0m0.119s 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.934 21:46:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:38.934 21:46:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:38.934 21:46:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:38.934 21:46:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.934 21:46:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.934 ************************************ 00:06:38.934 START TEST accel_wrong_workload 00:06:38.934 ************************************ 00:06:38.934 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:38.934 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:38.934 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:38.934 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:39.193 21:46:44 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:39.193 21:46:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:39.193 Unsupported workload type: foobar 00:06:39.193 [2024-07-24 21:46:44.671869] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:39.193 accel_perf options: 00:06:39.193 [-h help message] 00:06:39.193 [-q queue depth per core] 00:06:39.193 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:39.193 [-T number of threads per core 00:06:39.193 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:39.193 [-t time in seconds] 00:06:39.193 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:39.193 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:39.193 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:39.193 [-l for compress/decompress workloads, name of uncompressed input file 00:06:39.193 [-S for crc32c workload, use this seed value (default 0) 00:06:39.193 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:39.193 [-f for fill workload, use this BYTE value (default 255) 00:06:39.193 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:39.193 [-y verify result if this switch is on] 00:06:39.193 [-a tasks to allocate per core (default: same value as -q)] 00:06:39.193 Can be used to spread operations across a wider range of memory. 
00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.193 00:06:39.193 real 0m0.030s 00:06:39.193 user 0m0.016s 00:06:39.193 sys 0m0.013s 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.193 21:46:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:39.193 ************************************ 00:06:39.193 END TEST accel_wrong_workload 00:06:39.193 ************************************ 00:06:39.193 21:46:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:39.193 21:46:44 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:39.193 21:46:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.193 21:46:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.193 ************************************ 00:06:39.193 START TEST accel_negative_buffers 00:06:39.193 ************************************ 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:39.193 21:46:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:39.193 -x option must be non-negative. 
00:06:39.193 [2024-07-24 21:46:44.751535] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:39.193 accel_perf options: 00:06:39.193 [-h help message] 00:06:39.193 [-q queue depth per core] 00:06:39.193 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:39.193 [-T number of threads per core 00:06:39.193 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:39.193 [-t time in seconds] 00:06:39.193 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:39.193 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:39.193 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:39.193 [-l for compress/decompress workloads, name of uncompressed input file 00:06:39.193 [-S for crc32c workload, use this seed value (default 0) 00:06:39.193 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:39.193 [-f for fill workload, use this BYTE value (default 255) 00:06:39.193 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:39.193 [-y verify result if this switch is on] 00:06:39.193 [-a tasks to allocate per core (default: same value as -q)] 00:06:39.193 Can be used to spread operations across a wider range of memory. 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.193 ************************************ 00:06:39.193 END TEST accel_negative_buffers 00:06:39.193 ************************************ 00:06:39.193 00:06:39.193 real 0m0.032s 00:06:39.193 user 0m0.016s 00:06:39.193 sys 0m0.015s 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.193 21:46:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:39.193 21:46:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:39.193 21:46:44 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:39.193 21:46:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.193 21:46:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.193 ************************************ 00:06:39.193 START TEST accel_crc32c 00:06:39.193 ************************************ 00:06:39.193 21:46:44 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:39.193 21:46:44 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.193 21:46:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.194 21:46:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.194 21:46:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:39.194 21:46:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:39.194 [2024-07-24 21:46:44.829067] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:39.194 [2024-07-24 21:46:44.829155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73403 ] 00:06:39.453 [2024-07-24 21:46:44.970583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.453 [2024-07-24 21:46:45.047553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.453 21:46:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.829 ************************************ 00:06:40.829 END TEST accel_crc32c 00:06:40.829 ************************************ 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:40.829 21:46:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.829 00:06:40.829 real 0m1.454s 00:06:40.829 user 0m1.227s 00:06:40.829 sys 0m0.130s 00:06:40.829 21:46:46 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.829 21:46:46 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:40.829 21:46:46 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:40.829 21:46:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:40.829 21:46:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.829 21:46:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.829 ************************************ 00:06:40.829 START TEST accel_crc32c_C2 00:06:40.829 ************************************ 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
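END TEST accel_crc32c above closes the first positive case: a one-second crc32c run with seed 32 and result verification, invoked as accel_perf -t 1 -w crc32c -S 32 -y per the trace. The surrounding val= lines appear to record the settings accel.sh expects to see afterwards (crc32c opcode, 4096-byte transfer, software module, a one-second run), which the closing [[ -n software ]] and [[ -n crc32c ]] checks confirm. A hedged standalone rerun, assuming the same binary path and no JSON config:

# Sketch of the accel_crc32c case: crc32c for 1 second, seed 32 (-S 32), verify results.
# The 4 KiB transfer size is the accel_perf default noted in its usage output.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y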
00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.829 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:40.829 [2024-07-24 21:46:46.333510] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:40.829 [2024-07-24 21:46:46.333607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73437 ] 00:06:40.829 [2024-07-24 21:46:46.460131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.829 [2024-07-24 21:46:46.539844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
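The accel_crc32c_C2 case being configured above repeats the crc32c workload with -C 2, the option the usage output describes as the io vector size per operation, while the seed stays at its default of 0 (the val=0 entry in the trace). A hedged standalone equivalent, same path assumption as before:

# Sketch of accel_crc32c_C2: crc32c with a 2-element io vector per operation (-C 2),
# default seed, verification enabled.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2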
00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.088 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.089 21:46:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.464 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 ************************************ 00:06:42.465 END TEST accel_crc32c_C2 00:06:42.465 ************************************ 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.465 00:06:42.465 real 0m1.438s 00:06:42.465 user 0m1.230s 00:06:42.465 sys 0m0.116s 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.465 21:46:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:42.465 21:46:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:42.465 21:46:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:42.465 21:46:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.465 21:46:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.465 ************************************ 00:06:42.465 START TEST accel_copy 00:06:42.465 ************************************ 00:06:42.465 21:46:47 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:42.465 21:46:47 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:42.465 [2024-07-24 21:46:47.830319] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:42.465 [2024-07-24 21:46:47.830414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73472 ] 00:06:42.465 [2024-07-24 21:46:47.960475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.465 [2024-07-24 21:46:48.038452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
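With accel_crc32c_C2 finished, the trace above has moved on to accel_copy, a plain memory-copy workload run as accel_perf -t 1 -w copy -y with the default 4096-byte transfer. A hedged standalone equivalent:

# Sketch of the accel_copy case: 1-second copy workload on the software module, verified.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y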
00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.465 21:46:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.839 21:46:49 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.839 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.840 ************************************ 00:06:43.840 END TEST accel_copy 00:06:43.840 ************************************ 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:43.840 21:46:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.840 00:06:43.840 real 0m1.451s 00:06:43.840 user 0m1.241s 00:06:43.840 sys 0m0.114s 00:06:43.840 21:46:49 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.840 21:46:49 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.840 21:46:49 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.840 21:46:49 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:43.840 21:46:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.840 21:46:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.840 ************************************ 00:06:43.840 START TEST accel_fill 00:06:43.840 ************************************ 00:06:43.840 21:46:49 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:43.840 21:46:49 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
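accel_fill, started at the end of the trace above, is the first case in this stretch that overrides the defaults: fill byte 128 (-f 128, traced as val=0x80), queue depth 64 (-q 64) and 64 tasks per core (-a 64), matching the run_test accel_fill line. A hedged standalone equivalent, with the usual binary-path and no-config assumptions:

# Sketch of the accel_fill case: fill each 4 KiB buffer with byte value 128,
# queue depth 64, 64 tasks allocated per core, verify the result.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y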
00:06:43.840 [2024-07-24 21:46:49.332709] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:43.840 [2024-07-24 21:46:49.332792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73506 ] 00:06:43.840 [2024-07-24 21:46:49.466174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.840 [2024-07-24 21:46:49.551085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.098 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:44.099 21:46:49 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.099 21:46:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:45.475 21:46:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.475 00:06:45.475 real 0m1.453s 00:06:45.475 user 0m1.243s 00:06:45.475 sys 0m0.116s 00:06:45.475 21:46:50 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.475 21:46:50 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 ************************************ 00:06:45.475 END TEST accel_fill 00:06:45.475 ************************************ 00:06:45.475 21:46:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:45.475 21:46:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:45.475 21:46:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.475 21:46:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 ************************************ 00:06:45.475 START TEST accel_copy_crc32c 00:06:45.475 ************************************ 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:45.475 21:46:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:45.475 [2024-07-24 21:46:50.838879] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
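The accel_copy_crc32c case starting above combines the copy and crc32c opcodes into a single operation, invoked as accel_perf -t 1 -w copy_crc32c -y; its trace sets up two 4096-byte buffers and leaves the seed at 0. A hedged standalone equivalent:

# Sketch of the accel_copy_crc32c case: copy plus crc32c in a single operation,
# default seed, 4 KiB transfer, verification enabled.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y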
00:06:45.475 [2024-07-24 21:46:50.838966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73541 ] 00:06:45.475 [2024-07-24 21:46:50.979862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.475 [2024-07-24 21:46:51.070913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.475 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.475 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.475 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.475 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.475 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.476 21:46:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.858 00:06:46.858 real 0m1.465s 00:06:46.858 user 0m1.257s 00:06:46.858 sys 0m0.114s 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.858 ************************************ 00:06:46.858 END TEST accel_copy_crc32c 00:06:46.858 ************************************ 00:06:46.858 21:46:52 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:46.858 21:46:52 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.858 21:46:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.858 21:46:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.858 21:46:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.858 ************************************ 00:06:46.858 START TEST accel_copy_crc32c_C2 00:06:46.858 ************************************ 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.858 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:46.858 [2024-07-24 21:46:52.352905] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:46.858 [2024-07-24 21:46:52.352990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73575 ] 00:06:46.858 [2024-07-24 21:46:52.490194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.858 [2024-07-24 21:46:52.558698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.123 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
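For reference, the accel_perf command being traced in this block appears in full a few entries above: a one-second, verified copy-plus-CRC32C run split across two chunks (-t 1 -w copy_crc32c -y -C 2). A hand-run sketch follows; omitting the -c /dev/fd/62 argument is an assumption, since the harness uses that fd to feed the JSON config built by build_accel_config, which the [[ -n '' ]] check in the trace shows to be empty for these runs.

  # Hand-run sketch of the traced case; command taken from the log, with the
  # harness-only "-c /dev/fd/62" config fd omitted (assumption, see note above).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2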
00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.124 21:46:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.058 ************************************ 00:06:48.058 END TEST accel_copy_crc32c_C2 00:06:48.058 ************************************ 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.058 00:06:48.058 real 0m1.445s 00:06:48.058 user 0m1.240s 00:06:48.058 sys 0m0.114s 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.058 21:46:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 21:46:53 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:06:48.317 21:46:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:48.317 21:46:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.317 21:46:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 ************************************ 00:06:48.317 START TEST accel_dualcast 00:06:48.317 ************************************ 00:06:48.317 21:46:53 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:48.317 21:46:53 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:48.317 [2024-07-24 21:46:53.856493] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
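The long runs of IFS=:, read -r var val and case "$var" entries in these traces are accel.sh parsing the colon-separated name/value settings lines reported by accel_perf and keeping the ones it cares about (accel_opc and accel_module in this trace). A minimal sketch of that pattern, with the input file and the matched names as illustrative placeholders:

  # Minimal sketch of the IFS=: / read -r var val / case pattern seen in the trace.
  # Only accel_opc and accel_module are names taken from the log; the patterns and
  # the input file are illustrative placeholders, not copied from accel.sh.
  while IFS=: read -r var val; do
      case "$var" in
          *opcode*) accel_opc=${val//[[:space:]]/} ;;    # e.g. dualcast
          *module*) accel_module=${val//[[:space:]]/} ;; # e.g. software
      esac
  done < perf_settings.txt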
00:06:48.317 [2024-07-24 21:46:53.856594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73610 ] 00:06:48.317 [2024-07-24 21:46:53.990613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.575 [2024-07-24 21:46:54.080257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.575 21:46:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.950 
21:46:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.950 ************************************ 00:06:49.950 END TEST accel_dualcast 00:06:49.950 ************************************ 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:49.950 21:46:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.950 00:06:49.950 real 0m1.459s 00:06:49.950 user 0m1.254s 00:06:49.950 sys 0m0.111s 00:06:49.950 21:46:55 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.950 21:46:55 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:49.950 21:46:55 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:49.950 21:46:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:49.950 21:46:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.950 21:46:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.950 ************************************ 00:06:49.950 START TEST accel_compare 00:06:49.950 ************************************ 00:06:49.950 21:46:55 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:49.950 21:46:55 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:49.950 21:46:55 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:49.950 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:49.951 [2024-07-24 21:46:55.367449] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
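After each run the script asserts on what it parsed, which is where the [[ -n software ]] and [[ software == \s\o\f\t\w\a\r\e ]] entries come from; bash xtrace prints the right-hand side of == inside [[ ]] with every character escaped, hence the backslashes. In terms of the variables visible in the trace, the checks amount to:

  # The assertions behind the "[[ -n ... ]]" and "[[ ... == \s\o\f\t\w\a\r\e ]]" entries;
  # accel_module and accel_opc are the names set earlier in the trace.
  [[ -n "$accel_module" ]]            # a module was reported
  [[ -n "$accel_opc" ]]               # an opcode was reported
  [[ "$accel_module" == software ]]   # these runs are expected to use the software engine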
00:06:49.951 [2024-07-24 21:46:55.367534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73639 ] 00:06:49.951 [2024-07-24 21:46:55.507139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.951 [2024-07-24 21:46:55.581951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.951 21:46:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.327 21:46:56 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.327 21:46:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:51.328 21:46:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.328 00:06:51.328 real 0m1.450s 00:06:51.328 user 0m1.238s 00:06:51.328 sys 0m0.118s 00:06:51.328 21:46:56 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.328 21:46:56 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:51.328 ************************************ 00:06:51.328 END TEST accel_compare 00:06:51.328 ************************************ 00:06:51.328 21:46:56 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:51.328 21:46:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:51.328 21:46:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.328 21:46:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.328 ************************************ 00:06:51.328 START TEST accel_xor 00:06:51.328 ************************************ 00:06:51.328 21:46:56 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:51.328 21:46:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:51.328 [2024-07-24 21:46:56.866180] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
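Every block in this log has the same shape: a run_test NAME accel_test ... call from common/autotest_common.sh, START TEST / END TEST banners, and a real/user/sys triplet from timing the body. The stand-in below only reproduces that visible behaviour; the real run_test also manages xtrace, so treat it as a sketch rather than the actual wrapper.

  # Rough stand-in for the run_test wrapper whose banners and timings appear in
  # this log (sketch only; the real one in common/autotest_common.sh does more).
  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }
  run_test_sketch accel_xor accel_test -t 1 -w xor -y   # mirrors the call traced above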
00:06:51.328 [2024-07-24 21:46:56.866260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73679 ] 00:06:51.328 [2024-07-24 21:46:57.005505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.586 [2024-07-24 21:46:57.093697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.586 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:51.587 21:46:57 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.587 21:46:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 ************************************ 00:06:52.966 END TEST accel_xor 00:06:52.966 
************************************ 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.966 00:06:52.966 real 0m1.463s 00:06:52.966 user 0m1.248s 00:06:52.966 sys 0m0.123s 00:06:52.966 21:46:58 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.966 21:46:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:52.966 21:46:58 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:52.966 21:46:58 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:52.966 21:46:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.966 21:46:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.966 ************************************ 00:06:52.966 START TEST accel_xor 00:06:52.966 ************************************ 00:06:52.966 21:46:58 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:52.966 [2024-07-24 21:46:58.385267] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
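This second accel_xor block repeats the workload with three source buffers instead of the default two: the run_test line adds -x 3 and the trace records val=3 where the previous block recorded val=2. Hand-run equivalent, with the config fd again omitted as in the earlier sketch:

  # Same xor workload rerun with three source buffers ("-x 3"); compare the val=3
  # entry in this block with val=2 in the previous one. Config fd omitted (assumption).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3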
00:06:52.966 [2024-07-24 21:46:58.385353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73708 ] 00:06:52.966 [2024-07-24 21:46:58.519521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.966 [2024-07-24 21:46:58.611611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:52.966 21:46:58 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.966 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.225 21:46:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.161 ************************************ 00:06:54.161 END TEST accel_xor 00:06:54.161 ************************************ 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:54.161 21:46:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.161 00:06:54.161 real 0m1.479s 00:06:54.161 user 0m1.272s 00:06:54.161 sys 0m0.110s 00:06:54.161 21:46:59 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.161 21:46:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:54.420 21:46:59 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.420 21:46:59 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:54.420 21:46:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.420 21:46:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.420 ************************************ 00:06:54.420 START TEST accel_dif_verify 00:06:54.420 ************************************ 00:06:54.420 21:46:59 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:54.420 21:46:59 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:54.420 [2024-07-24 21:46:59.914923] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
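The accel_dif_verify block starting here layers DIF-style buffer parameters on top of the usual settings: besides the 4096-byte transfer size, the trace that follows records values of 512 bytes and 8 bytes which this log does not label, so they are left uninterpreted here. The invocation itself, as shown in the log (config fd omitted as before):

  # DIF verify case as driven above by: run_test accel_dif_verify accel_test -t 1 -w dif_verify
  # Config fd omitted (assumption), as in the earlier sketches.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify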
00:06:54.420 [2024-07-24 21:46:59.915035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73748 ] 00:06:54.420 [2024-07-24 21:47:00.055812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.679 [2024-07-24 21:47:00.150082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.679 21:47:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 ************************************ 00:06:56.055 END TEST accel_dif_verify 00:06:56.055 ************************************ 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:56.055 21:47:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.055 00:06:56.055 real 0m1.469s 00:06:56.055 user 0m1.255s 00:06:56.055 sys 0m0.123s 00:06:56.055 21:47:01 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.055 21:47:01 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:56.055 21:47:01 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:56.055 21:47:01 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:56.055 21:47:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.055 21:47:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.055 ************************************ 00:06:56.055 START TEST accel_dif_generate 00:06:56.055 ************************************ 00:06:56.055 21:47:01 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:56.055 [2024-07-24 21:47:01.438596] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:56.055 [2024-07-24 21:47:01.438713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73777 ] 00:06:56.055 [2024-07-24 21:47:01.574114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.055 [2024-07-24 21:47:01.661125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 
21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.055 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.056 21:47:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.430 ************************************ 00:06:57.430 END TEST accel_dif_generate 00:06:57.430 ************************************ 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:57.430 21:47:02 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.430 00:06:57.430 real 0m1.464s 00:06:57.430 user 0m1.253s 
00:06:57.430 sys 0m0.118s 00:06:57.430 21:47:02 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.430 21:47:02 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:57.430 21:47:02 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:57.430 21:47:02 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:57.430 21:47:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.430 21:47:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.430 ************************************ 00:06:57.430 START TEST accel_dif_generate_copy 00:06:57.430 ************************************ 00:06:57.430 21:47:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:57.430 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:57.430 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:57.430 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:57.431 21:47:02 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:57.431 [2024-07-24 21:47:02.950375] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
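The dif_verify, dif_generate and dif_generate_copy cases above all exercise the same standalone example binary, build/examples/accel_perf, for a one-second software run; the harness only adds a generated JSON config on /dev/fd/62. A minimal way to repeat these workloads by hand, assuming the /home/vagrant/spdk_repo/spdk checkout used by this job and leaving out the generated -c config (an assumption on my part: without a config, accel falls back to its software module, which is what this run selected anyway):
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w dif_verify         # 1-second DIF verify workload
./build/examples/accel_perf -t 1 -w dif_generate       # 1-second DIF generate workload
./build/examples/accel_perf -t 1 -w dif_generate_copy  # 1-second DIF generate-and-copy workload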
00:06:57.431 [2024-07-24 21:47:02.950466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73817 ] 00:06:57.431 [2024-07-24 21:47:03.087131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.690 [2024-07-24 21:47:03.185275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.690 21:47:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.065 00:06:59.065 real 0m1.474s 00:06:59.065 user 0m1.258s 00:06:59.065 sys 0m0.119s 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.065 21:47:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.065 ************************************ 00:06:59.065 END TEST accel_dif_generate_copy 00:06:59.065 ************************************ 00:06:59.065 21:47:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:59.065 21:47:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.065 21:47:04 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:59.065 21:47:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.065 21:47:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.065 ************************************ 00:06:59.065 START TEST accel_comp 00:06:59.065 ************************************ 00:06:59.065 21:47:04 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.065 21:47:04 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:59.065 21:47:04 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:59.065 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
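The accel_comp case launched above is the first one that needs an input file: the -l flag points accel_perf at the bib sample bundled under test/accel, which is compressed repeatedly for one second. Stripped of the harness-generated -c config (same caveat as before), the run amounts to:
./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib  # compress the bundled bib sample for 1 second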
00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:59.066 [2024-07-24 21:47:04.473725] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:59.066 [2024-07-24 21:47:04.473835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73846 ] 00:06:59.066 [2024-07-24 21:47:04.609941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.066 [2024-07-24 21:47:04.707701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.066 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.324 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.325 21:47:04 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.325 21:47:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:00.260 21:47:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.260 00:07:00.260 real 0m1.480s 00:07:00.260 user 0m1.267s 00:07:00.260 sys 0m0.119s 00:07:00.260 21:47:05 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.260 21:47:05 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:00.260 ************************************ 00:07:00.260 END TEST accel_comp 00:07:00.260 ************************************ 00:07:00.260 21:47:05 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.260 21:47:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:00.260 21:47:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.260 21:47:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.519 ************************************ 00:07:00.519 START TEST accel_decomp 00:07:00.519 ************************************ 00:07:00.519 21:47:05 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:00.519 
21:47:05 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:00.519 21:47:05 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:00.519 [2024-07-24 21:47:06.000939] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:00.519 [2024-07-24 21:47:06.001724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73886 ] 00:07:00.519 [2024-07-24 21:47:06.138617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.520 [2024-07-24 21:47:06.226640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.778 21:47:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.779 21:47:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.779 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.779 21:47:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.154 21:47:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.154 00:07:02.154 real 0m1.459s 00:07:02.154 user 0m1.245s 00:07:02.154 sys 0m0.124s 00:07:02.154 21:47:07 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.154 21:47:07 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:02.154 ************************************ 00:07:02.154 END TEST accel_decomp 00:07:02.154 ************************************ 00:07:02.154 21:47:07 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:02.154 21:47:07 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:02.154 21:47:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.154 21:47:07 accel -- common/autotest_common.sh@10 -- # set +x 
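The accel_decomp run that just finished flips the workload to decompress and adds -y, which asks accel_perf to verify each inflated buffer against the original bib input; the accel_decmop_full variant starting next keeps both flags and adds -o 0. The plain decompress run, again without the generated -c config, is simply:
./build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y  # -y: verify the decompressed output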
00:07:02.154 ************************************ 00:07:02.154 START TEST accel_decmop_full 00:07:02.154 ************************************ 00:07:02.154 21:47:07 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:02.154 [2024-07-24 21:47:07.509964] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
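The only new flag in the accel_decmop_full invocation above is -o 0. Judging from the trace that follows, it swaps the default 4096-byte operation size for the full bib payload: the earlier decompress run logged '4096 bytes' where this one logs '111250 bytes'. The standalone equivalent would be:
./build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0  # -o 0 as passed by the decmop_full case above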
00:07:02.154 [2024-07-24 21:47:07.510066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73915 ] 00:07:02.154 [2024-07-24 21:47:07.645791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.154 [2024-07-24 21:47:07.734286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.154 21:47:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.527 21:47:08 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:03.527 21:47:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.528 ************************************ 00:07:03.528 END TEST accel_decmop_full 00:07:03.528 ************************************ 00:07:03.528 21:47:08 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.528 00:07:03.528 real 0m1.480s 00:07:03.528 user 0m1.262s 00:07:03.528 sys 0m0.122s 00:07:03.528 21:47:08 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.528 21:47:08 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:03.528 21:47:09 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.528 21:47:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:03.528 21:47:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.528 21:47:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.528 ************************************ 00:07:03.528 START TEST accel_decomp_mcore 00:07:03.528 ************************************ 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:03.528 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:03.528 [2024-07-24 21:47:09.038849] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:03.528 [2024-07-24 21:47:09.038933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73954 ] 00:07:03.528 [2024-07-24 21:47:09.174842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.786 [2024-07-24 21:47:09.246500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.786 [2024-07-24 21:47:09.246555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.786 [2024-07-24 21:47:09.246698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.786 [2024-07-24 21:47:09.246702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.786 21:47:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.174 00:07:05.174 real 0m1.470s 00:07:05.174 user 0m4.695s 00:07:05.174 sys 0m0.125s 00:07:05.174 21:47:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.174 ************************************ 00:07:05.175 END TEST accel_decomp_mcore 00:07:05.175 ************************************ 00:07:05.175 21:47:10 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:05.175 21:47:10 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.175 21:47:10 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:05.175 21:47:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.175 21:47:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.175 ************************************ 00:07:05.175 START TEST accel_decomp_full_mcore 00:07:05.175 ************************************ 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:07:05.175 [2024-07-24 21:47:10.565326] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:05.175 [2024-07-24 21:47:10.565414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73987 ] 00:07:05.175 [2024-07-24 21:47:10.703308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.175 [2024-07-24 21:47:10.804007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.175 [2024-07-24 21:47:10.804152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.175 [2024-07-24 21:47:10.804408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.175 [2024-07-24 21:47:10.804272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.175 21:47:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.549 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.550 21:47:12 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.550 00:07:06.550 real 0m1.499s 00:07:06.550 user 0m4.703s 00:07:06.550 sys 0m0.135s 00:07:06.550 ************************************ 00:07:06.550 END TEST accel_decomp_full_mcore 00:07:06.550 ************************************ 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.550 21:47:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:06.550 21:47:12 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.550 21:47:12 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:06.550 21:47:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.550 21:47:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.550 ************************************ 00:07:06.550 START TEST accel_decomp_mthread 00:07:06.550 ************************************ 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:06.550 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:06.550 [2024-07-24 21:47:12.107163] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:06.550 [2024-07-24 21:47:12.107236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74025 ] 00:07:06.550 [2024-07-24 21:47:12.239954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.809 [2024-07-24 21:47:12.329837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 21:47:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.187 00:07:08.187 real 0m1.461s 00:07:08.187 user 0m1.262s 00:07:08.187 sys 0m0.107s 00:07:08.187 ************************************ 00:07:08.187 END TEST accel_decomp_mthread 00:07:08.187 ************************************ 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.187 21:47:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:08.187 21:47:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.187 21:47:13 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:08.187 21:47:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.187 21:47:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.187 ************************************ 00:07:08.187 START TEST accel_decomp_full_mthread 00:07:08.187 ************************************ 00:07:08.187 21:47:13 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:08.187 [2024-07-24 21:47:13.615302] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
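The two *_mthread cases differ from the earlier single-core runs only in the trailing '-T 2', which lines up with the '2' echoed in their traces where the other cases echo '1'; read as a worker-thread count (per the mthread test name, an inference rather than something the log states), a hand rerun of the full variant would look like:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2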
00:07:08.187 [2024-07-24 21:47:13.615379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74059 ] 00:07:08.187 [2024-07-24 21:47:13.749142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.187 [2024-07-24 21:47:13.836064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.187 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.446 21:47:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.381 00:07:09.381 real 0m1.482s 00:07:09.381 user 0m1.281s 00:07:09.381 sys 0m0.107s 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.381 21:47:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:09.381 ************************************ 00:07:09.381 END TEST accel_decomp_full_mthread 00:07:09.381 ************************************ 00:07:09.643 21:47:15 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:07:09.643 21:47:15 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:09.643 21:47:15 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:09.643 21:47:15 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:09.643 21:47:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.643 21:47:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.643 21:47:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.643 21:47:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.643 21:47:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.643 21:47:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.643 21:47:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.643 21:47:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:09.643 21:47:15 accel -- accel/accel.sh@41 -- # jq -r . 00:07:09.643 ************************************ 00:07:09.643 START TEST accel_dif_functional_tests 00:07:09.643 ************************************ 00:07:09.643 21:47:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:09.643 [2024-07-24 21:47:15.175958] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:09.643 [2024-07-24 21:47:15.176054] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74095 ] 00:07:09.643 [2024-07-24 21:47:15.309671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.905 [2024-07-24 21:47:15.399096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.905 [2024-07-24 21:47:15.399239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.905 [2024-07-24 21:47:15.399243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.905 [2024-07-24 21:47:15.452449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.905 00:07:09.905 00:07:09.905 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.905 http://cunit.sourceforge.net/ 00:07:09.905 00:07:09.905 00:07:09.905 Suite: accel_dif 00:07:09.905 Test: verify: DIF generated, GUARD check ...passed 00:07:09.905 Test: verify: DIF generated, APPTAG check ...passed 00:07:09.905 Test: verify: DIF generated, REFTAG check ...passed 00:07:09.905 Test: verify: DIF not generated, GUARD check ...passed 00:07:09.905 Test: verify: DIF not generated, APPTAG check ...passed 00:07:09.905 Test: verify: DIF not generated, REFTAG check ...passed 00:07:09.905 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:09.905 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:09.905 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:09.905 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-24 21:47:15.484051] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:09.905 [2024-07-24 21:47:15.484123] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:09.905 [2024-07-24 21:47:15.484156] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:09.905 [2024-07-24 21:47:15.484222] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:09.905 passed 00:07:09.905 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:09.905 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:09.905 Test: verify copy: DIF generated, GUARD check ...passed 00:07:09.905 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:09.905 Test: verify copy: DIF generated, REFTAG check ...[2024-07-24 21:47:15.484386] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:09.905 passed 00:07:09.905 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:09.905 Test: verify copy: DIF not generated, APPTAG check ...passed 00:07:09.906 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 21:47:15.484560] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:09.906 [2024-07-24 21:47:15.484594] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:09.906 [2024-07-24 21:47:15.484640] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:09.906 passed 00:07:09.906 Test: generate copy: DIF generated, GUARD check ...passed 00:07:09.906 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:09.906 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:09.906 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:09.906 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:09.906 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:09.906 Test: generate copy: iovecs-len validate ...passed 00:07:09.906 Test: generate copy: buffer alignment validate ...passed 00:07:09.906 00:07:09.906 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.906 suites 1 1 n/a 0 0 00:07:09.906 tests 26 26 26 0 0 00:07:09.906 asserts 115 115 115 0 n/a 00:07:09.906 00:07:09.906 Elapsed time = 0.002 seconds 00:07:09.906 [2024-07-24 21:47:15.484883] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
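The accel_dif_functional_tests run above exercises the three DIF fields the accel framework verifies per block: the Guard tag (a CRC over the data block), the Application tag, and the Reference tag (normally tied to the LBA), plus the bounce-buffer size check behind the final spdk_dif_generate_copy error. The "-c /dev/fd/62" argument in the trace is the expanded form of a bash process substitution that feeds the generated accel JSON config to the test binary. A minimal sketch of that invocation pattern, assuming an effectively empty config as in this run (the JSON payload below is illustrative, not the exact SPDK schema):

  # build_accel_config in the trace assembles a JSON accel config; with no module
  # overrides configured it is effectively empty, so a stand-in generator suffices here
  build_accel_config() {
    echo '{"subsystems": []}'   # illustrative payload only
  }
  # <(...) expands to a /dev/fd/NN path, which is what shows up as "-c /dev/fd/62" above
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(build_accel_config)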
00:07:10.164 00:07:10.164 real 0m0.553s 00:07:10.164 user 0m0.734s 00:07:10.164 sys 0m0.152s 00:07:10.164 21:47:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.164 21:47:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:10.164 ************************************ 00:07:10.164 END TEST accel_dif_functional_tests 00:07:10.164 ************************************ 00:07:10.164 00:07:10.164 real 0m33.768s 00:07:10.164 user 0m35.543s 00:07:10.164 sys 0m3.981s 00:07:10.164 21:47:15 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.164 21:47:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.164 ************************************ 00:07:10.164 END TEST accel 00:07:10.164 ************************************ 00:07:10.164 21:47:15 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:10.164 21:47:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.164 21:47:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.164 21:47:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.164 ************************************ 00:07:10.164 START TEST accel_rpc 00:07:10.164 ************************************ 00:07:10.164 21:47:15 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:10.164 * Looking for test storage... 00:07:10.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:10.164 21:47:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:10.164 21:47:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=74159 00:07:10.164 21:47:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 74159 00:07:10.164 21:47:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:10.164 21:47:15 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 74159 ']' 00:07:10.164 21:47:15 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.164 21:47:15 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.164 21:47:15 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.164 21:47:15 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.164 21:47:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.422 [2024-07-24 21:47:15.899351] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
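The accel_rpc suite that starts here follows the common SPDK target pattern: launch spdk_tgt paused with --wait-for-rpc, wait for its /var/tmp/spdk.sock RPC socket, reassign an opcode, then let initialization finish. A condensed sketch built only from commands visible in the surrounding trace (paths as in this log; rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py, and the explicit backgrounding plus poll loop stand in for the harness's waitforlisten helper):

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" --wait-for-rpc &        # target starts its RPC server and waits for framework_start_init
  spdk_tgt_pid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket answers
  until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  "$spdk/scripts/rpc.py" accel_assign_opc -o copy -m software    # route the copy opcode to the software module
  "$spdk/scripts/rpc.py" framework_start_init                    # complete startup so the assignment takes effect
  "$spdk/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # prints "software"
  kill "$spdk_tgt_pid"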
00:07:10.422 [2024-07-24 21:47:15.899457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74159 ] 00:07:10.422 [2024-07-24 21:47:16.038517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.422 [2024-07-24 21:47:16.129731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.357 21:47:16 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.357 21:47:16 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:11.357 21:47:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:11.357 21:47:16 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:11.357 21:47:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:11.357 21:47:16 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:11.357 21:47:16 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:11.357 21:47:16 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.357 21:47:16 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.357 21:47:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.357 ************************************ 00:07:11.357 START TEST accel_assign_opcode 00:07:11.357 ************************************ 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.357 [2024-07-24 21:47:16.862375] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.357 [2024-07-24 21:47:16.870343] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.357 21:47:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.357 [2024-07-24 21:47:16.933172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # grep software 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.616 software 00:07:11.616 ************************************ 00:07:11.616 END TEST accel_assign_opcode 00:07:11.616 ************************************ 00:07:11.616 00:07:11.616 real 0m0.287s 00:07:11.616 user 0m0.052s 00:07:11.616 sys 0m0.012s 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.616 21:47:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.616 21:47:17 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 74159 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 74159 ']' 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 74159 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74159 00:07:11.616 killing process with pid 74159 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74159' 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@965 -- # kill 74159 00:07:11.616 21:47:17 accel_rpc -- common/autotest_common.sh@970 -- # wait 74159 00:07:11.874 00:07:11.874 real 0m1.828s 00:07:11.874 user 0m1.912s 00:07:11.874 sys 0m0.435s 00:07:11.874 21:47:17 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.874 21:47:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.874 ************************************ 00:07:11.874 END TEST accel_rpc 00:07:11.874 ************************************ 00:07:12.132 21:47:17 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:12.132 21:47:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:12.132 21:47:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.132 21:47:17 -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 ************************************ 00:07:12.132 START TEST app_cmdline 00:07:12.132 ************************************ 00:07:12.132 21:47:17 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:12.132 * Looking for test storage... 
00:07:12.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:12.132 21:47:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:12.132 21:47:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74252 00:07:12.132 21:47:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74252 00:07:12.132 21:47:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:12.132 21:47:17 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 74252 ']' 00:07:12.132 21:47:17 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.132 21:47:17 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.132 21:47:17 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.132 21:47:17 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.132 21:47:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 [2024-07-24 21:47:17.775388] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:12.132 [2024-07-24 21:47:17.775490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74252 ] 00:07:12.390 [2024-07-24 21:47:17.915084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.390 [2024-07-24 21:47:18.004681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.390 [2024-07-24 21:47:18.057820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.324 21:47:18 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.324 21:47:18 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:13.324 21:47:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:13.324 { 00:07:13.324 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:13.324 "fields": { 00:07:13.324 "major": 24, 00:07:13.324 "minor": 5, 00:07:13.324 "patch": 1, 00:07:13.324 "suffix": "-pre", 00:07:13.324 "commit": "241d0f3c9" 00:07:13.324 } 00:07:13.324 } 00:07:13.324 21:47:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:13.324 21:47:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:13.324 21:47:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:13.324 21:47:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:13.324 21:47:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:13.324 21:47:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.324 21:47:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.324 21:47:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:13.324 21:47:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.582 21:47:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:13.582 21:47:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version 
== \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:13.582 21:47:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:13.582 21:47:19 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.840 request: 00:07:13.840 { 00:07:13.840 "method": "env_dpdk_get_mem_stats", 00:07:13.840 "req_id": 1 00:07:13.840 } 00:07:13.840 Got JSON-RPC error response 00:07:13.840 response: 00:07:13.840 { 00:07:13.840 "code": -32601, 00:07:13.840 "message": "Method not found" 00:07:13.840 } 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.840 21:47:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74252 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 74252 ']' 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 74252 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74252 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74252' 00:07:13.840 killing process with pid 74252 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@965 -- # kill 74252 00:07:13.840 21:47:19 app_cmdline -- common/autotest_common.sh@970 -- # wait 74252 00:07:14.099 00:07:14.099 real 0m2.080s 00:07:14.099 user 0m2.612s 00:07:14.099 sys 0m0.468s 00:07:14.099 21:47:19 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.099 21:47:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.099 ************************************ 00:07:14.099 END TEST app_cmdline 00:07:14.099 
************************************ 00:07:14.099 21:47:19 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:14.099 21:47:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.099 21:47:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.099 21:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.099 ************************************ 00:07:14.099 START TEST version 00:07:14.099 ************************************ 00:07:14.099 21:47:19 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:14.357 * Looking for test storage... 00:07:14.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:14.357 21:47:19 version -- app/version.sh@17 -- # get_header_version major 00:07:14.357 21:47:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # cut -f2 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.357 21:47:19 version -- app/version.sh@17 -- # major=24 00:07:14.357 21:47:19 version -- app/version.sh@18 -- # get_header_version minor 00:07:14.357 21:47:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # cut -f2 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.357 21:47:19 version -- app/version.sh@18 -- # minor=5 00:07:14.357 21:47:19 version -- app/version.sh@19 -- # get_header_version patch 00:07:14.357 21:47:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # cut -f2 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.357 21:47:19 version -- app/version.sh@19 -- # patch=1 00:07:14.357 21:47:19 version -- app/version.sh@20 -- # get_header_version suffix 00:07:14.357 21:47:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # cut -f2 00:07:14.357 21:47:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:14.357 21:47:19 version -- app/version.sh@20 -- # suffix=-pre 00:07:14.357 21:47:19 version -- app/version.sh@22 -- # version=24.5 00:07:14.357 21:47:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:14.357 21:47:19 version -- app/version.sh@25 -- # version=24.5.1 00:07:14.357 21:47:19 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:14.357 21:47:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:14.357 21:47:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:14.357 21:47:19 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:14.357 21:47:19 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:14.357 00:07:14.357 real 0m0.147s 00:07:14.357 user 0m0.088s 00:07:14.357 sys 0m0.090s 00:07:14.357 21:47:19 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.357 21:47:19 version -- common/autotest_common.sh@10 -- 
# set +x 00:07:14.357 ************************************ 00:07:14.357 END TEST version 00:07:14.357 ************************************ 00:07:14.357 21:47:19 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:14.357 21:47:19 -- spdk/autotest.sh@198 -- # uname -s 00:07:14.357 21:47:19 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:14.357 21:47:19 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:14.357 21:47:19 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:07:14.357 21:47:19 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:07:14.357 21:47:19 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:14.357 21:47:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.357 21:47:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.357 21:47:19 -- common/autotest_common.sh@10 -- # set +x 00:07:14.357 ************************************ 00:07:14.357 START TEST spdk_dd 00:07:14.357 ************************************ 00:07:14.357 21:47:19 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:14.357 * Looking for test storage... 00:07:14.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.357 21:47:20 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.357 21:47:20 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.357 21:47:20 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.357 21:47:20 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.357 21:47:20 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.357 21:47:20 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.357 21:47:20 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.357 21:47:20 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:14.357 21:47:20 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.358 21:47:20 spdk_dd -- dd/dd.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:14.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:14.925 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.925 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.925 21:47:20 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:14.925 21:47:20 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@230 -- # local class 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@232 -- # local progif 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@233 -- # class=01 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@15 -- # local i 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@24 -- # return 0 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@320 -- # 
uname -s 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:07:14.925 21:47:20 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:14.925 21:47:20 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@139 -- # local lib so 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:14.925 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 
21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # 
[[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:14.926 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:14.927 * spdk_dd linked to liburing 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:14.927 
21:47:20 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:14.927 21:47:20 spdk_dd -- 
common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:14.927 21:47:20 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:07:14.927 21:47:20 spdk_dd -- dd/common.sh@157 -- # return 0 00:07:14.927 21:47:20 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:14.927 21:47:20 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:14.927 21:47:20 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:14.927 21:47:20 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.927 21:47:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 ************************************ 00:07:14.927 START TEST spdk_dd_basic_rw 00:07:14.927 ************************************ 00:07:14.927 21:47:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:14.927 * Looking for test storage... 
00:07:14.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.927 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:15.187 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:15.188 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change 
Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion 
Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- 
dd/basic_rw.sh@93 -- # native_bs=4096 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.189 ************************************ 00:07:15.189 START TEST dd_bs_lt_native_bs 00:07:15.189 ************************************ 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.189 21:47:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:15.189 { 00:07:15.189 "subsystems": [ 00:07:15.189 { 00:07:15.189 "subsystem": "bdev", 00:07:15.189 "config": [ 00:07:15.189 { 00:07:15.189 "params": { 00:07:15.189 "trtype": "pcie", 00:07:15.189 "traddr": "0000:00:10.0", 00:07:15.189 "name": "Nvme0" 00:07:15.189 }, 00:07:15.189 "method": "bdev_nvme_attach_controller" 00:07:15.189 }, 00:07:15.189 { 00:07:15.189 "method": "bdev_wait_for_examine" 00:07:15.189 } 00:07:15.189 ] 00:07:15.189 } 00:07:15.189 ] 00:07:15.189 } 00:07:15.189 [2024-07-24 21:47:20.898072] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 
initialization... 00:07:15.189 [2024-07-24 21:47:20.898179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74572 ] 00:07:15.447 [2024-07-24 21:47:21.038749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.447 [2024-07-24 21:47:21.126407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.705 [2024-07-24 21:47:21.180972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.705 [2024-07-24 21:47:21.278641] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:15.705 [2024-07-24 21:47:21.278705] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.705 [2024-07-24 21:47:21.395510] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.986 00:07:15.986 real 0m0.635s 00:07:15.986 user 0m0.415s 00:07:15.986 sys 0m0.167s 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:15.986 ************************************ 00:07:15.986 END TEST dd_bs_lt_native_bs 00:07:15.986 ************************************ 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.986 ************************************ 00:07:15.986 START TEST dd_rw 00:07:15.986 ************************************ 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- 
# bss+=($((native_bs << bs))) 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:15.986 21:47:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.552 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:16.552 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:16.552 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.552 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.552 { 00:07:16.552 "subsystems": [ 00:07:16.552 { 00:07:16.552 "subsystem": "bdev", 00:07:16.552 "config": [ 00:07:16.552 { 00:07:16.552 "params": { 00:07:16.552 "trtype": "pcie", 00:07:16.552 "traddr": "0000:00:10.0", 00:07:16.552 "name": "Nvme0" 00:07:16.552 }, 00:07:16.552 "method": "bdev_nvme_attach_controller" 00:07:16.552 }, 00:07:16.552 { 00:07:16.552 "method": "bdev_wait_for_examine" 00:07:16.552 } 00:07:16.552 ] 00:07:16.552 } 00:07:16.552 ] 00:07:16.552 } 00:07:16.552 [2024-07-24 21:47:22.253164] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:16.552 [2024-07-24 21:47:22.253267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74609 ] 00:07:16.810 [2024-07-24 21:47:22.390519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.810 [2024-07-24 21:47:22.483310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.068 [2024-07-24 21:47:22.540588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.326  Copying: 60/60 [kB] (average 29 MBps) 00:07:17.326 00:07:17.326 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:17.326 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:17.326 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.326 21:47:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.326 [2024-07-24 21:47:22.902173] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:17.327 [2024-07-24 21:47:22.902286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74622 ] 00:07:17.327 { 00:07:17.327 "subsystems": [ 00:07:17.327 { 00:07:17.327 "subsystem": "bdev", 00:07:17.327 "config": [ 00:07:17.327 { 00:07:17.327 "params": { 00:07:17.327 "trtype": "pcie", 00:07:17.327 "traddr": "0000:00:10.0", 00:07:17.327 "name": "Nvme0" 00:07:17.327 }, 00:07:17.327 "method": "bdev_nvme_attach_controller" 00:07:17.327 }, 00:07:17.327 { 00:07:17.327 "method": "bdev_wait_for_examine" 00:07:17.327 } 00:07:17.327 ] 00:07:17.327 } 00:07:17.327 ] 00:07:17.327 } 00:07:17.584 [2024-07-24 21:47:23.044004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.584 [2024-07-24 21:47:23.137085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.584 [2024-07-24 21:47:23.194734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.843  Copying: 60/60 [kB] (average 19 MBps) 00:07:17.843 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.843 21:47:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.843 [2024-07-24 21:47:23.550567] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:17.843 [2024-07-24 21:47:23.550667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74638 ] 00:07:17.843 { 00:07:17.843 "subsystems": [ 00:07:17.843 { 00:07:17.843 "subsystem": "bdev", 00:07:17.843 "config": [ 00:07:17.843 { 00:07:17.843 "params": { 00:07:17.843 "trtype": "pcie", 00:07:17.843 "traddr": "0000:00:10.0", 00:07:17.843 "name": "Nvme0" 00:07:17.843 }, 00:07:17.843 "method": "bdev_nvme_attach_controller" 00:07:17.843 }, 00:07:17.843 { 00:07:17.843 "method": "bdev_wait_for_examine" 00:07:17.843 } 00:07:17.843 ] 00:07:17.843 } 00:07:17.843 ] 00:07:17.843 } 00:07:18.102 [2024-07-24 21:47:23.685158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.102 [2024-07-24 21:47:23.771002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.360 [2024-07-24 21:47:23.824182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.619  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:18.619 00:07:18.619 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:18.619 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:18.619 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:18.619 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:18.619 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:18.619 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:18.619 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.185 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:19.185 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:19.185 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.185 21:47:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.185 [2024-07-24 21:47:24.818827] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:19.185 [2024-07-24 21:47:24.818927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74658 ] 00:07:19.185 { 00:07:19.185 "subsystems": [ 00:07:19.185 { 00:07:19.185 "subsystem": "bdev", 00:07:19.185 "config": [ 00:07:19.185 { 00:07:19.185 "params": { 00:07:19.185 "trtype": "pcie", 00:07:19.185 "traddr": "0000:00:10.0", 00:07:19.185 "name": "Nvme0" 00:07:19.185 }, 00:07:19.185 "method": "bdev_nvme_attach_controller" 00:07:19.185 }, 00:07:19.185 { 00:07:19.185 "method": "bdev_wait_for_examine" 00:07:19.185 } 00:07:19.185 ] 00:07:19.185 } 00:07:19.185 ] 00:07:19.185 } 00:07:19.442 [2024-07-24 21:47:24.958457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.442 [2024-07-24 21:47:25.049277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.442 [2024-07-24 21:47:25.103057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.700  Copying: 60/60 [kB] (average 58 MBps) 00:07:19.700 00:07:19.700 21:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:19.700 21:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:19.700 21:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.700 21:47:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.958 [2024-07-24 21:47:25.446873] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:19.958 [2024-07-24 21:47:25.446988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74676 ] 00:07:19.958 { 00:07:19.958 "subsystems": [ 00:07:19.958 { 00:07:19.958 "subsystem": "bdev", 00:07:19.958 "config": [ 00:07:19.958 { 00:07:19.958 "params": { 00:07:19.958 "trtype": "pcie", 00:07:19.958 "traddr": "0000:00:10.0", 00:07:19.958 "name": "Nvme0" 00:07:19.958 }, 00:07:19.958 "method": "bdev_nvme_attach_controller" 00:07:19.958 }, 00:07:19.958 { 00:07:19.958 "method": "bdev_wait_for_examine" 00:07:19.958 } 00:07:19.958 ] 00:07:19.958 } 00:07:19.958 ] 00:07:19.958 } 00:07:19.958 [2024-07-24 21:47:25.585117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.216 [2024-07-24 21:47:25.678151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.216 [2024-07-24 21:47:25.732477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.475  Copying: 60/60 [kB] (average 58 MBps) 00:07:20.475 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.475 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.475 [2024-07-24 21:47:26.092807] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:20.475 [2024-07-24 21:47:26.093494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74691 ] 00:07:20.475 { 00:07:20.475 "subsystems": [ 00:07:20.475 { 00:07:20.475 "subsystem": "bdev", 00:07:20.475 "config": [ 00:07:20.475 { 00:07:20.475 "params": { 00:07:20.475 "trtype": "pcie", 00:07:20.475 "traddr": "0000:00:10.0", 00:07:20.475 "name": "Nvme0" 00:07:20.475 }, 00:07:20.475 "method": "bdev_nvme_attach_controller" 00:07:20.475 }, 00:07:20.475 { 00:07:20.475 "method": "bdev_wait_for_examine" 00:07:20.475 } 00:07:20.475 ] 00:07:20.475 } 00:07:20.475 ] 00:07:20.475 } 00:07:20.732 [2024-07-24 21:47:26.230402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.732 [2024-07-24 21:47:26.317844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.732 [2024-07-24 21:47:26.371157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.990  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:20.990 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:20.990 21:47:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.555 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:21.555 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:21.555 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.555 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.813 [2024-07-24 21:47:27.290884] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:21.813 [2024-07-24 21:47:27.290982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74710 ] 00:07:21.813 { 00:07:21.813 "subsystems": [ 00:07:21.813 { 00:07:21.813 "subsystem": "bdev", 00:07:21.813 "config": [ 00:07:21.813 { 00:07:21.813 "params": { 00:07:21.813 "trtype": "pcie", 00:07:21.813 "traddr": "0000:00:10.0", 00:07:21.813 "name": "Nvme0" 00:07:21.813 }, 00:07:21.813 "method": "bdev_nvme_attach_controller" 00:07:21.813 }, 00:07:21.813 { 00:07:21.813 "method": "bdev_wait_for_examine" 00:07:21.813 } 00:07:21.813 ] 00:07:21.813 } 00:07:21.813 ] 00:07:21.813 } 00:07:21.813 [2024-07-24 21:47:27.431772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.813 [2024-07-24 21:47:27.529748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.071 [2024-07-24 21:47:27.587646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.329  Copying: 56/56 [kB] (average 54 MBps) 00:07:22.329 00:07:22.329 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:22.329 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:22.329 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.329 21:47:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.329 { 00:07:22.329 "subsystems": [ 00:07:22.329 { 00:07:22.329 "subsystem": "bdev", 00:07:22.329 "config": [ 00:07:22.329 { 00:07:22.329 "params": { 00:07:22.329 "trtype": "pcie", 00:07:22.329 "traddr": "0000:00:10.0", 00:07:22.329 "name": "Nvme0" 00:07:22.329 }, 00:07:22.329 "method": "bdev_nvme_attach_controller" 00:07:22.329 }, 00:07:22.329 { 00:07:22.329 "method": "bdev_wait_for_examine" 00:07:22.329 } 00:07:22.329 ] 00:07:22.329 } 00:07:22.329 ] 00:07:22.329 } 00:07:22.329 [2024-07-24 21:47:27.951912] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:22.329 [2024-07-24 21:47:27.952012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74724 ] 00:07:22.588 [2024-07-24 21:47:28.090453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.588 [2024-07-24 21:47:28.182028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.588 [2024-07-24 21:47:28.235721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.845  Copying: 56/56 [kB] (average 54 MBps) 00:07:22.845 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.845 21:47:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.104 { 00:07:23.104 "subsystems": [ 00:07:23.104 { 00:07:23.104 "subsystem": "bdev", 00:07:23.104 "config": [ 00:07:23.104 { 00:07:23.104 "params": { 00:07:23.104 "trtype": "pcie", 00:07:23.104 "traddr": "0000:00:10.0", 00:07:23.104 "name": "Nvme0" 00:07:23.104 }, 00:07:23.104 "method": "bdev_nvme_attach_controller" 00:07:23.104 }, 00:07:23.104 { 00:07:23.104 "method": "bdev_wait_for_examine" 00:07:23.104 } 00:07:23.104 ] 00:07:23.104 } 00:07:23.104 ] 00:07:23.104 } 00:07:23.104 [2024-07-24 21:47:28.603008] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:23.104 [2024-07-24 21:47:28.603122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74745 ] 00:07:23.104 [2024-07-24 21:47:28.742434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.362 [2024-07-24 21:47:28.833633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.362 [2024-07-24 21:47:28.889290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.620  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:23.620 00:07:23.620 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:23.620 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:23.620 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:23.620 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:23.620 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:23.620 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:23.620 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.187 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:24.187 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:24.187 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.187 21:47:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.187 [2024-07-24 21:47:29.836843] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:24.187 [2024-07-24 21:47:29.836925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74764 ] 00:07:24.187 { 00:07:24.187 "subsystems": [ 00:07:24.187 { 00:07:24.187 "subsystem": "bdev", 00:07:24.187 "config": [ 00:07:24.187 { 00:07:24.187 "params": { 00:07:24.187 "trtype": "pcie", 00:07:24.187 "traddr": "0000:00:10.0", 00:07:24.187 "name": "Nvme0" 00:07:24.187 }, 00:07:24.187 "method": "bdev_nvme_attach_controller" 00:07:24.187 }, 00:07:24.187 { 00:07:24.187 "method": "bdev_wait_for_examine" 00:07:24.187 } 00:07:24.187 ] 00:07:24.187 } 00:07:24.187 ] 00:07:24.187 } 00:07:24.445 [2024-07-24 21:47:29.968236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.445 [2024-07-24 21:47:30.059775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.445 [2024-07-24 21:47:30.115013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.703  Copying: 56/56 [kB] (average 54 MBps) 00:07:24.703 00:07:24.962 21:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:24.962 21:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:24.962 21:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.962 21:47:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.962 [2024-07-24 21:47:30.473706] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:24.962 [2024-07-24 21:47:30.473819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74783 ] 00:07:24.962 { 00:07:24.962 "subsystems": [ 00:07:24.962 { 00:07:24.962 "subsystem": "bdev", 00:07:24.962 "config": [ 00:07:24.962 { 00:07:24.962 "params": { 00:07:24.962 "trtype": "pcie", 00:07:24.962 "traddr": "0000:00:10.0", 00:07:24.962 "name": "Nvme0" 00:07:24.962 }, 00:07:24.962 "method": "bdev_nvme_attach_controller" 00:07:24.962 }, 00:07:24.962 { 00:07:24.962 "method": "bdev_wait_for_examine" 00:07:24.962 } 00:07:24.962 ] 00:07:24.962 } 00:07:24.962 ] 00:07:24.962 } 00:07:24.962 [2024-07-24 21:47:30.616304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.220 [2024-07-24 21:47:30.707810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.220 [2024-07-24 21:47:30.761945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.478  Copying: 56/56 [kB] (average 54 MBps) 00:07:25.478 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:25.478 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.478 [2024-07-24 21:47:31.121659] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:25.478 [2024-07-24 21:47:31.121758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74793 ] 00:07:25.478 { 00:07:25.478 "subsystems": [ 00:07:25.478 { 00:07:25.478 "subsystem": "bdev", 00:07:25.479 "config": [ 00:07:25.479 { 00:07:25.479 "params": { 00:07:25.479 "trtype": "pcie", 00:07:25.479 "traddr": "0000:00:10.0", 00:07:25.479 "name": "Nvme0" 00:07:25.479 }, 00:07:25.479 "method": "bdev_nvme_attach_controller" 00:07:25.479 }, 00:07:25.479 { 00:07:25.479 "method": "bdev_wait_for_examine" 00:07:25.479 } 00:07:25.479 ] 00:07:25.479 } 00:07:25.479 ] 00:07:25.479 } 00:07:25.769 [2024-07-24 21:47:31.261170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.769 [2024-07-24 21:47:31.352405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.769 [2024-07-24 21:47:31.406528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.048  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:26.048 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:26.048 21:47:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.615 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:26.615 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:26.615 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.615 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.615 [2024-07-24 21:47:32.294256] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:26.615 [2024-07-24 21:47:32.294408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74817 ] 00:07:26.615 { 00:07:26.615 "subsystems": [ 00:07:26.615 { 00:07:26.615 "subsystem": "bdev", 00:07:26.615 "config": [ 00:07:26.615 { 00:07:26.615 "params": { 00:07:26.615 "trtype": "pcie", 00:07:26.615 "traddr": "0000:00:10.0", 00:07:26.615 "name": "Nvme0" 00:07:26.615 }, 00:07:26.615 "method": "bdev_nvme_attach_controller" 00:07:26.615 }, 00:07:26.615 { 00:07:26.615 "method": "bdev_wait_for_examine" 00:07:26.615 } 00:07:26.615 ] 00:07:26.615 } 00:07:26.615 ] 00:07:26.615 } 00:07:26.873 [2024-07-24 21:47:32.438907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.874 [2024-07-24 21:47:32.531450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.874 [2024-07-24 21:47:32.584358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.390  Copying: 48/48 [kB] (average 46 MBps) 00:07:27.390 00:07:27.390 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:27.390 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:27.390 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.390 21:47:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.390 [2024-07-24 21:47:32.932709] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:27.390 [2024-07-24 21:47:32.932819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74831 ] 00:07:27.390 { 00:07:27.390 "subsystems": [ 00:07:27.390 { 00:07:27.390 "subsystem": "bdev", 00:07:27.390 "config": [ 00:07:27.390 { 00:07:27.390 "params": { 00:07:27.390 "trtype": "pcie", 00:07:27.390 "traddr": "0000:00:10.0", 00:07:27.390 "name": "Nvme0" 00:07:27.390 }, 00:07:27.390 "method": "bdev_nvme_attach_controller" 00:07:27.390 }, 00:07:27.390 { 00:07:27.390 "method": "bdev_wait_for_examine" 00:07:27.390 } 00:07:27.390 ] 00:07:27.390 } 00:07:27.390 ] 00:07:27.390 } 00:07:27.390 [2024-07-24 21:47:33.070797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.649 [2024-07-24 21:47:33.161988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.649 [2024-07-24 21:47:33.215405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.908  Copying: 48/48 [kB] (average 46 MBps) 00:07:27.908 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.908 21:47:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.908 [2024-07-24 21:47:33.574508] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:27.908 [2024-07-24 21:47:33.574641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74852 ] 00:07:27.908 { 00:07:27.908 "subsystems": [ 00:07:27.908 { 00:07:27.908 "subsystem": "bdev", 00:07:27.908 "config": [ 00:07:27.908 { 00:07:27.908 "params": { 00:07:27.908 "trtype": "pcie", 00:07:27.908 "traddr": "0000:00:10.0", 00:07:27.908 "name": "Nvme0" 00:07:27.908 }, 00:07:27.908 "method": "bdev_nvme_attach_controller" 00:07:27.908 }, 00:07:27.908 { 00:07:27.908 "method": "bdev_wait_for_examine" 00:07:27.908 } 00:07:27.908 ] 00:07:27.908 } 00:07:27.908 ] 00:07:27.908 } 00:07:28.166 [2024-07-24 21:47:33.707410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.166 [2024-07-24 21:47:33.800067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.166 [2024-07-24 21:47:33.854445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.682  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:28.682 00:07:28.682 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:28.682 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:28.682 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:28.682 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:28.682 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:28.682 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:28.682 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.248 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:29.248 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:29.248 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.248 21:47:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.248 [2024-07-24 21:47:34.736547] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:29.248 [2024-07-24 21:47:34.737098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74871 ] 00:07:29.248 { 00:07:29.248 "subsystems": [ 00:07:29.248 { 00:07:29.248 "subsystem": "bdev", 00:07:29.248 "config": [ 00:07:29.248 { 00:07:29.248 "params": { 00:07:29.249 "trtype": "pcie", 00:07:29.249 "traddr": "0000:00:10.0", 00:07:29.249 "name": "Nvme0" 00:07:29.249 }, 00:07:29.249 "method": "bdev_nvme_attach_controller" 00:07:29.249 }, 00:07:29.249 { 00:07:29.249 "method": "bdev_wait_for_examine" 00:07:29.249 } 00:07:29.249 ] 00:07:29.249 } 00:07:29.249 ] 00:07:29.249 } 00:07:29.249 [2024-07-24 21:47:34.871673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.506 [2024-07-24 21:47:34.967412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.506 [2024-07-24 21:47:35.020571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.766  Copying: 48/48 [kB] (average 46 MBps) 00:07:29.766 00:07:29.766 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:29.766 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:29.766 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.766 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.766 [2024-07-24 21:47:35.381384] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:29.766 [2024-07-24 21:47:35.381500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74879 ] 00:07:29.766 { 00:07:29.766 "subsystems": [ 00:07:29.766 { 00:07:29.766 "subsystem": "bdev", 00:07:29.766 "config": [ 00:07:29.766 { 00:07:29.766 "params": { 00:07:29.766 "trtype": "pcie", 00:07:29.766 "traddr": "0000:00:10.0", 00:07:29.766 "name": "Nvme0" 00:07:29.766 }, 00:07:29.766 "method": "bdev_nvme_attach_controller" 00:07:29.766 }, 00:07:29.766 { 00:07:29.766 "method": "bdev_wait_for_examine" 00:07:29.766 } 00:07:29.766 ] 00:07:29.766 } 00:07:29.766 ] 00:07:29.766 } 00:07:30.024 [2024-07-24 21:47:35.522879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.024 [2024-07-24 21:47:35.616765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.024 [2024-07-24 21:47:35.670382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.283  Copying: 48/48 [kB] (average 46 MBps) 00:07:30.283 00:07:30.283 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.283 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:30.283 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:30.283 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:30.283 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:30.284 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:30.284 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:30.284 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:30.284 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:30.284 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.284 21:47:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.542 { 00:07:30.542 "subsystems": [ 00:07:30.542 { 00:07:30.542 "subsystem": "bdev", 00:07:30.542 "config": [ 00:07:30.542 { 00:07:30.542 "params": { 00:07:30.542 "trtype": "pcie", 00:07:30.542 "traddr": "0000:00:10.0", 00:07:30.542 "name": "Nvme0" 00:07:30.542 }, 00:07:30.542 "method": "bdev_nvme_attach_controller" 00:07:30.542 }, 00:07:30.542 { 00:07:30.542 "method": "bdev_wait_for_examine" 00:07:30.542 } 00:07:30.542 ] 00:07:30.542 } 00:07:30.542 ] 00:07:30.542 } 00:07:30.542 [2024-07-24 21:47:36.040436] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:30.542 [2024-07-24 21:47:36.040570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74900 ] 00:07:30.542 [2024-07-24 21:47:36.180299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.799 [2024-07-24 21:47:36.276076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.799 [2024-07-24 21:47:36.330831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.058  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:31.058 00:07:31.058 00:07:31.058 real 0m15.114s 00:07:31.058 user 0m11.074s 00:07:31.058 sys 0m5.419s 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.058 ************************************ 00:07:31.058 END TEST dd_rw 00:07:31.058 ************************************ 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.058 ************************************ 00:07:31.058 START TEST dd_rw_offset 00:07:31.058 ************************************ 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=419wse226of2n8wk3g36q77c1lws4fy4z5cs3ne08qolktd1ig4fyojyrtm754gl54iylwx5vik9pbp5miiqngvdlopzovyqjjyl6t2tcl1bgyn6uvyrh7ey3qqttgr7x8y4v1oyx37ymuqjsg64xun4mh7mtljp00eeqwwmxbihj5l00aeqnvolp72r7k049fqwc6ryzdz1orizist60ykv1q86xtryr8z5t3m0qrgvwocbe1usi5bnoe3czqiwlhna0yny7mtnocbylb8ctw14sj9d6pmul5kq2vgmv8mqtpshgvpabn1m9a32z0d3vsed83umllirpk5qjzlrxk0zc2yr2ep31xzwo9dcu2ds960fyukgmktvnoe0k4f1p8j1q0ymjx8b30a02aok1ufoct7ielg4cxdh33x7rw8ibevh5tqexj6lvx1k92o186wosvzje4ndqw0o7j6e4nk651kzlx3yvl9dnwxawu7wcqucypn68s6170hissgpf9aqtxnv5d36v8eio5wlywr8zmchagwehzdj7bqnotsprqpljn10grh4srmqcctf2u1c9c2wgs2fakff7go0ygnzcz3cfbce779tix0cyx9ohpt1qqbida32xapbavqhz4tboka03l7qr6ul28fd1vsf6jwteoqffrp8wyuip1pm9o7peaeeoialton6le1ytfjdtzphq94mhkqws56xri2l720zsyzbwblp87s68b82mbjfqywg82gsb1oovwr7yvaatmfynggvs7olcjoqta8cxryt8httf41667ecwmmwto6rdf9nqoko50vtt825f0p0weiofbafxweodz4w8ec08aeqy4vrt13sg7lqvfm27e34otr34ygl8v6wifob2gbfaz7sknsdibs32craxc0wiqgzrd3epwktov5isvq0egl3lzl2suwz36dv4decpwkdund69kr0c12ine0iq2pdxp3sh2vdjtzsrlvtyt96mfbyprtf87fx24hgy91rh54wwkekkw7znozoca5ley45bl6vo0vxrh1qcevdmky3ye0pjmbi5nyylrqg148tnizfgn3srvdpyfjoq496v7ipp57avfjizhwp1y3oeqecickd4d5ggfktv8b6giljr1wd2q82zfuy7lo5yc1xqs2rv3h028vp6dr0vwwco3nx4jwn0b3gucvzn63j4nhkfkjx6kbgxaupnfrk8poqj0y7h34wpobj9rucgzomd9sa7yoyqovkdzhel1hffiwkq1l4j15js8nhykq84a3fk09rakuxmvfhg5tftjath4meeg8hdtszytub4d4acyj1lcc885c7mt50kmi363sxfdrfuofajhu4gydljc5ncisqp0tdn2ovhtbrsq30kqak5b902yvx4aw4ddfv2zcru8q0hga8h8lrcktgswulef3dj13kt5c8kbxg52u8gt9wzfm3ty092ti6w1uwtdevl90rwd3lirbap0outykx0kohkh2x9z8rjh896gu6lyp6tvjdkw5pai7pksaj1uettj0vskgv00rlbgyruyasuya88lwaarxy14cppiwrzc7pxv05i3aff7n8kcqupjgtwiya0cdijbesw9kb6dto0f96wr1lmctf01evp5n9xgx9z9fx6yxuatp2ouexrpg9z2ied7gei09krfw56cc2v1m3vq1y1ve11r59noc1xs6wy7ap0mkebx9n3a6xtsu6wzy6nljpsjbtto2ndb5phzqqybu100c1i71y0lyfjli802ljgsor0tyutoh9rmzntlxowwaoe8ev0e78cbfe1ito4aki3lw46oajjfwta90ilyscmwwdk9v95nkmg505oe7wx8impp2khnwkvlqzqmuqwmb19qd8wvo58enk4234pluldzexfwdlxcny8t2iwlabt719yvlf3kmes4pjzrxy6dcquyrf2o9gnez4fbb96sfi6xg0iutbo4v4a2srpx2yigbtx3u4fea1xs2qbgv4u87ef6e70ey9xpjopenhsyvfdvha328yr84z7zsil5lz9orbio2xnzh6qydrix1qjqp0ct4uwzkmb5i2d7ce7hsgsh2vmonn4hebwgn67tmv8o1p8qgruuddt74ovogi9on6nim1firt0ltnmdddjc6w218cuxfbg3n8slom8daw41qaqciuhffyca8sjk0uh3y2dhifv1ifps2liwdnc6hv5066km7v9dyskh8pcb4yo5pj5usrr8h1wfwopyya2xvpkxj3x3hzhrp08a6beo25fmx7n8zxx293trggg0r4o7snost1vfhpxe6bjwp964d0shg28sbzplsv6wk3gst7flkiir9xv3ge0tfvzwoqzw66ucsu1asw5vcf8hydzz6vgta36e5idywnrpa2tkz0ylo857xsqrd6c5idwsqeyxcf34aj95t0jecrlnbaqs22zwgutntf3epycktofrpx4k6570o85m8ewuacfdrz3eedutmc4bimeqzkeu38f8ctvpyy1ww2yr4t9i3whpsx4zil685pvhvc1zpa4ug5y6b8b5vgd6luwylp3riybg5kf3pw2vz1bavte9nviaivjkiv5qx3tadoq9r6pn9lpshhip8q22y7abtm1abgay7vwfvf3coe12vr701eo1lo72ldkvzo76mmqvck2zod3lcenvb3mrf23ebsadh9swnpykdmow71wm9fsv0w1g12t0dm0x65k33ldryhc4vka7vfpvd9q8s6aakon9hsl5gsszxrv9wxhd2fuhzr8co2l83hkqhr4358eoy0taws8vyvss5b1viplyxwamcx0xybw6d9snel9ujeyltj5xex9cs63cm1n2irykpe5brmpdo0i8065biokh15lgrxglfkh3m5auld7geb5pjazgeus0thpl6jtw2ztpxbqrwdlb139fbhn90jvnxtpy7oyxc4kp84imk5t9vshci18ws7ukob6qcvt2ndz9wxws4dohprqe4jtsbnyvs5nu0ivacqegyftas5nak5z7yqro6fae0h4b2myvupeo59sdwf4mp7vuxt1ddwax2x0werfr2udjltsoz98fqjcc2bdtwgpi2x99xqmfj6ycpgi8od97mq5u4kjmeeq0axac3euuvuai6ll2jabq9vvxcp5jnbez2gk1xnc81q1ydkwfvv7zu9eeaelaqiru4w2wcredwu65lvquo3mbycimp1ji1iw46nifq3rw1sfxdt02i8eq63zvajialy7qyzfutp5wljnqda7jqwgmjslmksjm0rmzcx5j67ko8whfzjnp0tsz632ck0998umowl50ogn7psdnetdhy0ob9qd9x5lbuqj838toxycf4314x56lo57guv5d3bi53dxc7hx3wfw9j3xpw7hl5ddb1s61mg4ij3jun63610cb55ndmg5q6rx00tyhsdkp28weks3bajfqzgv1k3ir759oyk2z48ppf9a9o7sty0pfy5zv3z9jipnk9cy1
c6p7g9c282axi9i8mab6ijgxwft8r22yjd1tqcljn9gbdx42ey30k90b5es0fyly1d8hwyynzq2xy31zruw0t7w1495bkxx7ewjo55d4n3qhuvvfijogaw5h5tnj3yigqx3tgpymihvd9q8taep417sbr00zlh4wgrbgpj4tna0ry5i4ap77hdqxqv151m14w1i3fh2e8jrrrsss27qd7ptuwb3l1am6242fd5nol8c6x5yhimhfkzniiwo39clnw75un3an018tm3f3rw1a9famhu0jsufcazt2gs70gg64gtzvwk6ksdexpmjyclk5gev2szxcomv0v07spkusc8uzguwvvewk5i6hhyr2cfunia8hgymyln0fuw7ewy6363jv3xe41q4akp0aulcgfq167ctx8ggibzpfnc5fdgix73cbb36w7rdgbm6xx8qpbim1m8k8rq5rrgj0dcbn5smw8vp0ygn2f4nwnhpggnuj6i395cgc9cjx4oea23ramvr8aqxmzi2qzwfibrfhy88oap3804gtx5 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:31.058 21:47:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:31.316 { 00:07:31.316 "subsystems": [ 00:07:31.316 { 00:07:31.316 "subsystem": "bdev", 00:07:31.316 "config": [ 00:07:31.316 { 00:07:31.316 "params": { 00:07:31.316 "trtype": "pcie", 00:07:31.316 "traddr": "0000:00:10.0", 00:07:31.316 "name": "Nvme0" 00:07:31.316 }, 00:07:31.316 "method": "bdev_nvme_attach_controller" 00:07:31.316 }, 00:07:31.316 { 00:07:31.316 "method": "bdev_wait_for_examine" 00:07:31.316 } 00:07:31.316 ] 00:07:31.316 } 00:07:31.316 ] 00:07:31.316 } 00:07:31.316 [2024-07-24 21:47:36.804531] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:31.316 [2024-07-24 21:47:36.804694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74936 ] 00:07:31.316 [2024-07-24 21:47:36.940832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.574 [2024-07-24 21:47:37.042996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.574 [2024-07-24 21:47:37.097380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.831  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:31.831 00:07:31.831 21:47:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:31.831 21:47:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:31.831 21:47:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:31.831 21:47:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:31.831 [2024-07-24 21:47:37.459475] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:31.831 [2024-07-24 21:47:37.459580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74944 ] 00:07:31.831 { 00:07:31.831 "subsystems": [ 00:07:31.831 { 00:07:31.831 "subsystem": "bdev", 00:07:31.831 "config": [ 00:07:31.831 { 00:07:31.831 "params": { 00:07:31.831 "trtype": "pcie", 00:07:31.831 "traddr": "0000:00:10.0", 00:07:31.831 "name": "Nvme0" 00:07:31.831 }, 00:07:31.832 "method": "bdev_nvme_attach_controller" 00:07:31.832 }, 00:07:31.832 { 00:07:31.832 "method": "bdev_wait_for_examine" 00:07:31.832 } 00:07:31.832 ] 00:07:31.832 } 00:07:31.832 ] 00:07:31.832 } 00:07:32.090 [2024-07-24 21:47:37.600939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.090 [2024-07-24 21:47:37.698002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.090 [2024-07-24 21:47:37.752615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.349  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:32.349 00:07:32.349 21:47:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:32.349 21:47:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 419wse226of2n8wk3g36q77c1lws4fy4z5cs3ne08qolktd1ig4fyojyrtm754gl54iylwx5vik9pbp5miiqngvdlopzovyqjjyl6t2tcl1bgyn6uvyrh7ey3qqttgr7x8y4v1oyx37ymuqjsg64xun4mh7mtljp00eeqwwmxbihj5l00aeqnvolp72r7k049fqwc6ryzdz1orizist60ykv1q86xtryr8z5t3m0qrgvwocbe1usi5bnoe3czqiwlhna0yny7mtnocbylb8ctw14sj9d6pmul5kq2vgmv8mqtpshgvpabn1m9a32z0d3vsed83umllirpk5qjzlrxk0zc2yr2ep31xzwo9dcu2ds960fyukgmktvnoe0k4f1p8j1q0ymjx8b30a02aok1ufoct7ielg4cxdh33x7rw8ibevh5tqexj6lvx1k92o186wosvzje4ndqw0o7j6e4nk651kzlx3yvl9dnwxawu7wcqucypn68s6170hissgpf9aqtxnv5d36v8eio5wlywr8zmchagwehzdj7bqnotsprqpljn10grh4srmqcctf2u1c9c2wgs2fakff7go0ygnzcz3cfbce779tix0cyx9ohpt1qqbida32xapbavqhz4tboka03l7qr6ul28fd1vsf6jwteoqffrp8wyuip1pm9o7peaeeoialton6le1ytfjdtzphq94mhkqws56xri2l720zsyzbwblp87s68b82mbjfqywg82gsb1oovwr7yvaatmfynggvs7olcjoqta8cxryt8httf41667ecwmmwto6rdf9nqoko50vtt825f0p0weiofbafxweodz4w8ec08aeqy4vrt13sg7lqvfm27e34otr34ygl8v6wifob2gbfaz7sknsdibs32craxc0wiqgzrd3epwktov5isvq0egl3lzl2suwz36dv4decpwkdund69kr0c12ine0iq2pdxp3sh2vdjtzsrlvtyt96mfbyprtf87fx24hgy91rh54wwkekkw7znozoca5ley45bl6vo0vxrh1qcevdmky3ye0pjmbi5nyylrqg148tnizfgn3srvdpyfjoq496v7ipp57avfjizhwp1y3oeqecickd4d5ggfktv8b6giljr1wd2q82zfuy7lo5yc1xqs2rv3h028vp6dr0vwwco3nx4jwn0b3gucvzn63j4nhkfkjx6kbgxaupnfrk8poqj0y7h34wpobj9rucgzomd9sa7yoyqovkdzhel1hffiwkq1l4j15js8nhykq84a3fk09rakuxmvfhg5tftjath4meeg8hdtszytub4d4acyj1lcc885c7mt50kmi363sxfdrfuofajhu4gydljc5ncisqp0tdn2ovhtbrsq30kqak5b902yvx4aw4ddfv2zcru8q0hga8h8lrcktgswulef3dj13kt5c8kbxg52u8gt9wzfm3ty092ti6w1uwtdevl90rwd3lirbap0outykx0kohkh2x9z8rjh896gu6lyp6tvjdkw5pai7pksaj1uettj0vskgv00rlbgyruyasuya88lwaarxy14cppiwrzc7pxv05i3aff7n8kcqupjgtwiya0cdijbesw9kb6dto0f96wr1lmctf01evp5n9xgx9z9fx6yxuatp2ouexrpg9z2ied7gei09krfw56cc2v1m3vq1y1ve11r59noc1xs6wy7ap0mkebx9n3a6xtsu6wzy6nljpsjbtto2ndb5phzqqybu100c1i71y0lyfjli802ljgsor0tyutoh9rmzntlxowwaoe8ev0e78cbfe1ito4aki3lw46oajjfwta90ilyscmwwdk9v95nkmg505oe7wx8impp2khnwkvlqzqmuqwmb19qd8wvo58enk4234pluldzexfwdlxcny8t2iwlabt719yvlf3kmes4pjzrxy6dcquyrf2o9gnez4fbb96sfi6xg0iutbo4v4a2srpx2yigbtx3u4fea1xs2qbgv4u87ef6e70ey9xpjopenhsyvfdvha328yr84z7zsil5lz9orbio2xnzh6qydrix1qjqp0ct4uwzkmb5i2d7ce7hsgsh2vmonn4hebwgn67tmv8o1p8qgruuddt74ovogi9on6nim1firt0ltnmdddjc6w2
18cuxfbg3n8slom8daw41qaqciuhffyca8sjk0uh3y2dhifv1ifps2liwdnc6hv5066km7v9dyskh8pcb4yo5pj5usrr8h1wfwopyya2xvpkxj3x3hzhrp08a6beo25fmx7n8zxx293trggg0r4o7snost1vfhpxe6bjwp964d0shg28sbzplsv6wk3gst7flkiir9xv3ge0tfvzwoqzw66ucsu1asw5vcf8hydzz6vgta36e5idywnrpa2tkz0ylo857xsqrd6c5idwsqeyxcf34aj95t0jecrlnbaqs22zwgutntf3epycktofrpx4k6570o85m8ewuacfdrz3eedutmc4bimeqzkeu38f8ctvpyy1ww2yr4t9i3whpsx4zil685pvhvc1zpa4ug5y6b8b5vgd6luwylp3riybg5kf3pw2vz1bavte9nviaivjkiv5qx3tadoq9r6pn9lpshhip8q22y7abtm1abgay7vwfvf3coe12vr701eo1lo72ldkvzo76mmqvck2zod3lcenvb3mrf23ebsadh9swnpykdmow71wm9fsv0w1g12t0dm0x65k33ldryhc4vka7vfpvd9q8s6aakon9hsl5gsszxrv9wxhd2fuhzr8co2l83hkqhr4358eoy0taws8vyvss5b1viplyxwamcx0xybw6d9snel9ujeyltj5xex9cs63cm1n2irykpe5brmpdo0i8065biokh15lgrxglfkh3m5auld7geb5pjazgeus0thpl6jtw2ztpxbqrwdlb139fbhn90jvnxtpy7oyxc4kp84imk5t9vshci18ws7ukob6qcvt2ndz9wxws4dohprqe4jtsbnyvs5nu0ivacqegyftas5nak5z7yqro6fae0h4b2myvupeo59sdwf4mp7vuxt1ddwax2x0werfr2udjltsoz98fqjcc2bdtwgpi2x99xqmfj6ycpgi8od97mq5u4kjmeeq0axac3euuvuai6ll2jabq9vvxcp5jnbez2gk1xnc81q1ydkwfvv7zu9eeaelaqiru4w2wcredwu65lvquo3mbycimp1ji1iw46nifq3rw1sfxdt02i8eq63zvajialy7qyzfutp5wljnqda7jqwgmjslmksjm0rmzcx5j67ko8whfzjnp0tsz632ck0998umowl50ogn7psdnetdhy0ob9qd9x5lbuqj838toxycf4314x56lo57guv5d3bi53dxc7hx3wfw9j3xpw7hl5ddb1s61mg4ij3jun63610cb55ndmg5q6rx00tyhsdkp28weks3bajfqzgv1k3ir759oyk2z48ppf9a9o7sty0pfy5zv3z9jipnk9cy1c6p7g9c282axi9i8mab6ijgxwft8r22yjd1tqcljn9gbdx42ey30k90b5es0fyly1d8hwyynzq2xy31zruw0t7w1495bkxx7ewjo55d4n3qhuvvfijogaw5h5tnj3yigqx3tgpymihvd9q8taep417sbr00zlh4wgrbgpj4tna0ry5i4ap77hdqxqv151m14w1i3fh2e8jrrrsss27qd7ptuwb3l1am6242fd5nol8c6x5yhimhfkzniiwo39clnw75un3an018tm3f3rw1a9famhu0jsufcazt2gs70gg64gtzvwk6ksdexpmjyclk5gev2szxcomv0v07spkusc8uzguwvvewk5i6hhyr2cfunia8hgymyln0fuw7ewy6363jv3xe41q4akp0aulcgfq167ctx8ggibzpfnc5fdgix73cbb36w7rdgbm6xx8qpbim1m8k8rq5rrgj0dcbn5smw8vp0ygn2f4nwnhpggnuj6i395cgc9cjx4oea23ramvr8aqxmzi2qzwfibrfhy88oap3804gtx5 == 
\4\1\9\w\s\e\2\2\6\o\f\2\n\8\w\k\3\g\3\6\q\7\7\c\1\l\w\s\4\f\y\4\z\5\c\s\3\n\e\0\8\q\o\l\k\t\d\1\i\g\4\f\y\o\j\y\r\t\m\7\5\4\g\l\5\4\i\y\l\w\x\5\v\i\k\9\p\b\p\5\m\i\i\q\n\g\v\d\l\o\p\z\o\v\y\q\j\j\y\l\6\t\2\t\c\l\1\b\g\y\n\6\u\v\y\r\h\7\e\y\3\q\q\t\t\g\r\7\x\8\y\4\v\1\o\y\x\3\7\y\m\u\q\j\s\g\6\4\x\u\n\4\m\h\7\m\t\l\j\p\0\0\e\e\q\w\w\m\x\b\i\h\j\5\l\0\0\a\e\q\n\v\o\l\p\7\2\r\7\k\0\4\9\f\q\w\c\6\r\y\z\d\z\1\o\r\i\z\i\s\t\6\0\y\k\v\1\q\8\6\x\t\r\y\r\8\z\5\t\3\m\0\q\r\g\v\w\o\c\b\e\1\u\s\i\5\b\n\o\e\3\c\z\q\i\w\l\h\n\a\0\y\n\y\7\m\t\n\o\c\b\y\l\b\8\c\t\w\1\4\s\j\9\d\6\p\m\u\l\5\k\q\2\v\g\m\v\8\m\q\t\p\s\h\g\v\p\a\b\n\1\m\9\a\3\2\z\0\d\3\v\s\e\d\8\3\u\m\l\l\i\r\p\k\5\q\j\z\l\r\x\k\0\z\c\2\y\r\2\e\p\3\1\x\z\w\o\9\d\c\u\2\d\s\9\6\0\f\y\u\k\g\m\k\t\v\n\o\e\0\k\4\f\1\p\8\j\1\q\0\y\m\j\x\8\b\3\0\a\0\2\a\o\k\1\u\f\o\c\t\7\i\e\l\g\4\c\x\d\h\3\3\x\7\r\w\8\i\b\e\v\h\5\t\q\e\x\j\6\l\v\x\1\k\9\2\o\1\8\6\w\o\s\v\z\j\e\4\n\d\q\w\0\o\7\j\6\e\4\n\k\6\5\1\k\z\l\x\3\y\v\l\9\d\n\w\x\a\w\u\7\w\c\q\u\c\y\p\n\6\8\s\6\1\7\0\h\i\s\s\g\p\f\9\a\q\t\x\n\v\5\d\3\6\v\8\e\i\o\5\w\l\y\w\r\8\z\m\c\h\a\g\w\e\h\z\d\j\7\b\q\n\o\t\s\p\r\q\p\l\j\n\1\0\g\r\h\4\s\r\m\q\c\c\t\f\2\u\1\c\9\c\2\w\g\s\2\f\a\k\f\f\7\g\o\0\y\g\n\z\c\z\3\c\f\b\c\e\7\7\9\t\i\x\0\c\y\x\9\o\h\p\t\1\q\q\b\i\d\a\3\2\x\a\p\b\a\v\q\h\z\4\t\b\o\k\a\0\3\l\7\q\r\6\u\l\2\8\f\d\1\v\s\f\6\j\w\t\e\o\q\f\f\r\p\8\w\y\u\i\p\1\p\m\9\o\7\p\e\a\e\e\o\i\a\l\t\o\n\6\l\e\1\y\t\f\j\d\t\z\p\h\q\9\4\m\h\k\q\w\s\5\6\x\r\i\2\l\7\2\0\z\s\y\z\b\w\b\l\p\8\7\s\6\8\b\8\2\m\b\j\f\q\y\w\g\8\2\g\s\b\1\o\o\v\w\r\7\y\v\a\a\t\m\f\y\n\g\g\v\s\7\o\l\c\j\o\q\t\a\8\c\x\r\y\t\8\h\t\t\f\4\1\6\6\7\e\c\w\m\m\w\t\o\6\r\d\f\9\n\q\o\k\o\5\0\v\t\t\8\2\5\f\0\p\0\w\e\i\o\f\b\a\f\x\w\e\o\d\z\4\w\8\e\c\0\8\a\e\q\y\4\v\r\t\1\3\s\g\7\l\q\v\f\m\2\7\e\3\4\o\t\r\3\4\y\g\l\8\v\6\w\i\f\o\b\2\g\b\f\a\z\7\s\k\n\s\d\i\b\s\3\2\c\r\a\x\c\0\w\i\q\g\z\r\d\3\e\p\w\k\t\o\v\5\i\s\v\q\0\e\g\l\3\l\z\l\2\s\u\w\z\3\6\d\v\4\d\e\c\p\w\k\d\u\n\d\6\9\k\r\0\c\1\2\i\n\e\0\i\q\2\p\d\x\p\3\s\h\2\v\d\j\t\z\s\r\l\v\t\y\t\9\6\m\f\b\y\p\r\t\f\8\7\f\x\2\4\h\g\y\9\1\r\h\5\4\w\w\k\e\k\k\w\7\z\n\o\z\o\c\a\5\l\e\y\4\5\b\l\6\v\o\0\v\x\r\h\1\q\c\e\v\d\m\k\y\3\y\e\0\p\j\m\b\i\5\n\y\y\l\r\q\g\1\4\8\t\n\i\z\f\g\n\3\s\r\v\d\p\y\f\j\o\q\4\9\6\v\7\i\p\p\5\7\a\v\f\j\i\z\h\w\p\1\y\3\o\e\q\e\c\i\c\k\d\4\d\5\g\g\f\k\t\v\8\b\6\g\i\l\j\r\1\w\d\2\q\8\2\z\f\u\y\7\l\o\5\y\c\1\x\q\s\2\r\v\3\h\0\2\8\v\p\6\d\r\0\v\w\w\c\o\3\n\x\4\j\w\n\0\b\3\g\u\c\v\z\n\6\3\j\4\n\h\k\f\k\j\x\6\k\b\g\x\a\u\p\n\f\r\k\8\p\o\q\j\0\y\7\h\3\4\w\p\o\b\j\9\r\u\c\g\z\o\m\d\9\s\a\7\y\o\y\q\o\v\k\d\z\h\e\l\1\h\f\f\i\w\k\q\1\l\4\j\1\5\j\s\8\n\h\y\k\q\8\4\a\3\f\k\0\9\r\a\k\u\x\m\v\f\h\g\5\t\f\t\j\a\t\h\4\m\e\e\g\8\h\d\t\s\z\y\t\u\b\4\d\4\a\c\y\j\1\l\c\c\8\8\5\c\7\m\t\5\0\k\m\i\3\6\3\s\x\f\d\r\f\u\o\f\a\j\h\u\4\g\y\d\l\j\c\5\n\c\i\s\q\p\0\t\d\n\2\o\v\h\t\b\r\s\q\3\0\k\q\a\k\5\b\9\0\2\y\v\x\4\a\w\4\d\d\f\v\2\z\c\r\u\8\q\0\h\g\a\8\h\8\l\r\c\k\t\g\s\w\u\l\e\f\3\d\j\1\3\k\t\5\c\8\k\b\x\g\5\2\u\8\g\t\9\w\z\f\m\3\t\y\0\9\2\t\i\6\w\1\u\w\t\d\e\v\l\9\0\r\w\d\3\l\i\r\b\a\p\0\o\u\t\y\k\x\0\k\o\h\k\h\2\x\9\z\8\r\j\h\8\9\6\g\u\6\l\y\p\6\t\v\j\d\k\w\5\p\a\i\7\p\k\s\a\j\1\u\e\t\t\j\0\v\s\k\g\v\0\0\r\l\b\g\y\r\u\y\a\s\u\y\a\8\8\l\w\a\a\r\x\y\1\4\c\p\p\i\w\r\z\c\7\p\x\v\0\5\i\3\a\f\f\7\n\8\k\c\q\u\p\j\g\t\w\i\y\a\0\c\d\i\j\b\e\s\w\9\k\b\6\d\t\o\0\f\9\6\w\r\1\l\m\c\t\f\0\1\e\v\p\5\n\9\x\g\x\9\z\9\f\x\6\y\x\u\a\t\p\2\o\u\e\x\r\p\g\9\z\2\i\e\d\7\g\e\i\0\9\k\r\f\w\5\6\c\c\2\v\1\m\3\v\q\1\y\1\v\e\1\1\r\5\9\n\o\c\1\x\s\6\w\y\7\a\p\0\m\k\e\b\x\9\n\3\a\6\x\t\s\u\6\w\z\y\6\n\l\j\p\s\j\b\t\t\o\2\n\d\b\5\p\h\z\q\q\y\b\u\1\0\
0\c\1\i\7\1\y\0\l\y\f\j\l\i\8\0\2\l\j\g\s\o\r\0\t\y\u\t\o\h\9\r\m\z\n\t\l\x\o\w\w\a\o\e\8\e\v\0\e\7\8\c\b\f\e\1\i\t\o\4\a\k\i\3\l\w\4\6\o\a\j\j\f\w\t\a\9\0\i\l\y\s\c\m\w\w\d\k\9\v\9\5\n\k\m\g\5\0\5\o\e\7\w\x\8\i\m\p\p\2\k\h\n\w\k\v\l\q\z\q\m\u\q\w\m\b\1\9\q\d\8\w\v\o\5\8\e\n\k\4\2\3\4\p\l\u\l\d\z\e\x\f\w\d\l\x\c\n\y\8\t\2\i\w\l\a\b\t\7\1\9\y\v\l\f\3\k\m\e\s\4\p\j\z\r\x\y\6\d\c\q\u\y\r\f\2\o\9\g\n\e\z\4\f\b\b\9\6\s\f\i\6\x\g\0\i\u\t\b\o\4\v\4\a\2\s\r\p\x\2\y\i\g\b\t\x\3\u\4\f\e\a\1\x\s\2\q\b\g\v\4\u\8\7\e\f\6\e\7\0\e\y\9\x\p\j\o\p\e\n\h\s\y\v\f\d\v\h\a\3\2\8\y\r\8\4\z\7\z\s\i\l\5\l\z\9\o\r\b\i\o\2\x\n\z\h\6\q\y\d\r\i\x\1\q\j\q\p\0\c\t\4\u\w\z\k\m\b\5\i\2\d\7\c\e\7\h\s\g\s\h\2\v\m\o\n\n\4\h\e\b\w\g\n\6\7\t\m\v\8\o\1\p\8\q\g\r\u\u\d\d\t\7\4\o\v\o\g\i\9\o\n\6\n\i\m\1\f\i\r\t\0\l\t\n\m\d\d\d\j\c\6\w\2\1\8\c\u\x\f\b\g\3\n\8\s\l\o\m\8\d\a\w\4\1\q\a\q\c\i\u\h\f\f\y\c\a\8\s\j\k\0\u\h\3\y\2\d\h\i\f\v\1\i\f\p\s\2\l\i\w\d\n\c\6\h\v\5\0\6\6\k\m\7\v\9\d\y\s\k\h\8\p\c\b\4\y\o\5\p\j\5\u\s\r\r\8\h\1\w\f\w\o\p\y\y\a\2\x\v\p\k\x\j\3\x\3\h\z\h\r\p\0\8\a\6\b\e\o\2\5\f\m\x\7\n\8\z\x\x\2\9\3\t\r\g\g\g\0\r\4\o\7\s\n\o\s\t\1\v\f\h\p\x\e\6\b\j\w\p\9\6\4\d\0\s\h\g\2\8\s\b\z\p\l\s\v\6\w\k\3\g\s\t\7\f\l\k\i\i\r\9\x\v\3\g\e\0\t\f\v\z\w\o\q\z\w\6\6\u\c\s\u\1\a\s\w\5\v\c\f\8\h\y\d\z\z\6\v\g\t\a\3\6\e\5\i\d\y\w\n\r\p\a\2\t\k\z\0\y\l\o\8\5\7\x\s\q\r\d\6\c\5\i\d\w\s\q\e\y\x\c\f\3\4\a\j\9\5\t\0\j\e\c\r\l\n\b\a\q\s\2\2\z\w\g\u\t\n\t\f\3\e\p\y\c\k\t\o\f\r\p\x\4\k\6\5\7\0\o\8\5\m\8\e\w\u\a\c\f\d\r\z\3\e\e\d\u\t\m\c\4\b\i\m\e\q\z\k\e\u\3\8\f\8\c\t\v\p\y\y\1\w\w\2\y\r\4\t\9\i\3\w\h\p\s\x\4\z\i\l\6\8\5\p\v\h\v\c\1\z\p\a\4\u\g\5\y\6\b\8\b\5\v\g\d\6\l\u\w\y\l\p\3\r\i\y\b\g\5\k\f\3\p\w\2\v\z\1\b\a\v\t\e\9\n\v\i\a\i\v\j\k\i\v\5\q\x\3\t\a\d\o\q\9\r\6\p\n\9\l\p\s\h\h\i\p\8\q\2\2\y\7\a\b\t\m\1\a\b\g\a\y\7\v\w\f\v\f\3\c\o\e\1\2\v\r\7\0\1\e\o\1\l\o\7\2\l\d\k\v\z\o\7\6\m\m\q\v\c\k\2\z\o\d\3\l\c\e\n\v\b\3\m\r\f\2\3\e\b\s\a\d\h\9\s\w\n\p\y\k\d\m\o\w\7\1\w\m\9\f\s\v\0\w\1\g\1\2\t\0\d\m\0\x\6\5\k\3\3\l\d\r\y\h\c\4\v\k\a\7\v\f\p\v\d\9\q\8\s\6\a\a\k\o\n\9\h\s\l\5\g\s\s\z\x\r\v\9\w\x\h\d\2\f\u\h\z\r\8\c\o\2\l\8\3\h\k\q\h\r\4\3\5\8\e\o\y\0\t\a\w\s\8\v\y\v\s\s\5\b\1\v\i\p\l\y\x\w\a\m\c\x\0\x\y\b\w\6\d\9\s\n\e\l\9\u\j\e\y\l\t\j\5\x\e\x\9\c\s\6\3\c\m\1\n\2\i\r\y\k\p\e\5\b\r\m\p\d\o\0\i\8\0\6\5\b\i\o\k\h\1\5\l\g\r\x\g\l\f\k\h\3\m\5\a\u\l\d\7\g\e\b\5\p\j\a\z\g\e\u\s\0\t\h\p\l\6\j\t\w\2\z\t\p\x\b\q\r\w\d\l\b\1\3\9\f\b\h\n\9\0\j\v\n\x\t\p\y\7\o\y\x\c\4\k\p\8\4\i\m\k\5\t\9\v\s\h\c\i\1\8\w\s\7\u\k\o\b\6\q\c\v\t\2\n\d\z\9\w\x\w\s\4\d\o\h\p\r\q\e\4\j\t\s\b\n\y\v\s\5\n\u\0\i\v\a\c\q\e\g\y\f\t\a\s\5\n\a\k\5\z\7\y\q\r\o\6\f\a\e\0\h\4\b\2\m\y\v\u\p\e\o\5\9\s\d\w\f\4\m\p\7\v\u\x\t\1\d\d\w\a\x\2\x\0\w\e\r\f\r\2\u\d\j\l\t\s\o\z\9\8\f\q\j\c\c\2\b\d\t\w\g\p\i\2\x\9\9\x\q\m\f\j\6\y\c\p\g\i\8\o\d\9\7\m\q\5\u\4\k\j\m\e\e\q\0\a\x\a\c\3\e\u\u\v\u\a\i\6\l\l\2\j\a\b\q\9\v\v\x\c\p\5\j\n\b\e\z\2\g\k\1\x\n\c\8\1\q\1\y\d\k\w\f\v\v\7\z\u\9\e\e\a\e\l\a\q\i\r\u\4\w\2\w\c\r\e\d\w\u\6\5\l\v\q\u\o\3\m\b\y\c\i\m\p\1\j\i\1\i\w\4\6\n\i\f\q\3\r\w\1\s\f\x\d\t\0\2\i\8\e\q\6\3\z\v\a\j\i\a\l\y\7\q\y\z\f\u\t\p\5\w\l\j\n\q\d\a\7\j\q\w\g\m\j\s\l\m\k\s\j\m\0\r\m\z\c\x\5\j\6\7\k\o\8\w\h\f\z\j\n\p\0\t\s\z\6\3\2\c\k\0\9\9\8\u\m\o\w\l\5\0\o\g\n\7\p\s\d\n\e\t\d\h\y\0\o\b\9\q\d\9\x\5\l\b\u\q\j\8\3\8\t\o\x\y\c\f\4\3\1\4\x\5\6\l\o\5\7\g\u\v\5\d\3\b\i\5\3\d\x\c\7\h\x\3\w\f\w\9\j\3\x\p\w\7\h\l\5\d\d\b\1\s\6\1\m\g\4\i\j\3\j\u\n\6\3\6\1\0\c\b\5\5\n\d\m\g\5\q\6\r\x\0\0\t\y\h\s\d\k\p\2\8\w\e\k\s\3\b\a\j\f\q\z\g\v\1\k\3\i\r\7\5\9\o\y\k\2\z\4\8\p\p\f\9\a\9\o\7\s\t\y\0\p\f\y\5\z\v\3\z\9\j\i\p\n\k\9\c\y\1\c\6\p\7\g
\9\c\2\8\2\a\x\i\9\i\8\m\a\b\6\i\j\g\x\w\f\t\8\r\2\2\y\j\d\1\t\q\c\l\j\n\9\g\b\d\x\4\2\e\y\3\0\k\9\0\b\5\e\s\0\f\y\l\y\1\d\8\h\w\y\y\n\z\q\2\x\y\3\1\z\r\u\w\0\t\7\w\1\4\9\5\b\k\x\x\7\e\w\j\o\5\5\d\4\n\3\q\h\u\v\v\f\i\j\o\g\a\w\5\h\5\t\n\j\3\y\i\g\q\x\3\t\g\p\y\m\i\h\v\d\9\q\8\t\a\e\p\4\1\7\s\b\r\0\0\z\l\h\4\w\g\r\b\g\p\j\4\t\n\a\0\r\y\5\i\4\a\p\7\7\h\d\q\x\q\v\1\5\1\m\1\4\w\1\i\3\f\h\2\e\8\j\r\r\r\s\s\s\2\7\q\d\7\p\t\u\w\b\3\l\1\a\m\6\2\4\2\f\d\5\n\o\l\8\c\6\x\5\y\h\i\m\h\f\k\z\n\i\i\w\o\3\9\c\l\n\w\7\5\u\n\3\a\n\0\1\8\t\m\3\f\3\r\w\1\a\9\f\a\m\h\u\0\j\s\u\f\c\a\z\t\2\g\s\7\0\g\g\6\4\g\t\z\v\w\k\6\k\s\d\e\x\p\m\j\y\c\l\k\5\g\e\v\2\s\z\x\c\o\m\v\0\v\0\7\s\p\k\u\s\c\8\u\z\g\u\w\v\v\e\w\k\5\i\6\h\h\y\r\2\c\f\u\n\i\a\8\h\g\y\m\y\l\n\0\f\u\w\7\e\w\y\6\3\6\3\j\v\3\x\e\4\1\q\4\a\k\p\0\a\u\l\c\g\f\q\1\6\7\c\t\x\8\g\g\i\b\z\p\f\n\c\5\f\d\g\i\x\7\3\c\b\b\3\6\w\7\r\d\g\b\m\6\x\x\8\q\p\b\i\m\1\m\8\k\8\r\q\5\r\r\g\j\0\d\c\b\n\5\s\m\w\8\v\p\0\y\g\n\2\f\4\n\w\n\h\p\g\g\n\u\j\6\i\3\9\5\c\g\c\9\c\j\x\4\o\e\a\2\3\r\a\m\v\r\8\a\q\x\m\z\i\2\q\z\w\f\i\b\r\f\h\y\8\8\o\a\p\3\8\0\4\g\t\x\5 ]] 00:07:32.349 00:07:32.349 real 0m1.366s 00:07:32.349 user 0m0.933s 00:07:32.349 sys 0m0.598s 00:07:32.349 21:47:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.349 21:47:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:32.349 ************************************ 00:07:32.350 END TEST dd_rw_offset 00:07:32.350 ************************************ 00:07:32.607 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:32.607 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.608 21:47:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.608 [2024-07-24 21:47:38.143907] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
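Editor's aside: stripped of the harness plumbing, the dd_rw_offset pass that ends above is a seek/skip round trip through the Nvme0n1 bdev: write dd.dump0 one block into the device, read the same block back into dd.dump1, and compare the 4096-byte payload. A minimal sketch of that flow in plain bash, using only spdk_dd options that appear in the trace; the JSON mirrors the config printed above, while feeding it through process substitution and comparing with cmp (rather than the test's read/pattern match) are assumptions made for illustration.

    #!/usr/bin/env bash
    # Illustrative sketch of the dd_rw_offset round trip, not the harness code itself.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    bdev_conf() {
      # Same bdev subsystem config that the trace prints before every run.
      printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
    }

    # Write dump0 into the bdev starting at block offset 1, then read that block back.
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(bdev_conf)
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(bdev_conf)

    # The test reads 4096 bytes of dump1 back into the shell and pattern-matches them
    # against dump0; comparing the first 4096 bytes of each file expresses the same check.
    cmp <(head -c 4096 "$DUMP0") <(head -c 4096 "$DUMP1")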
00:07:32.608 [2024-07-24 21:47:38.144019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74979 ] 00:07:32.608 { 00:07:32.608 "subsystems": [ 00:07:32.608 { 00:07:32.608 "subsystem": "bdev", 00:07:32.608 "config": [ 00:07:32.608 { 00:07:32.608 "params": { 00:07:32.608 "trtype": "pcie", 00:07:32.608 "traddr": "0000:00:10.0", 00:07:32.608 "name": "Nvme0" 00:07:32.608 }, 00:07:32.608 "method": "bdev_nvme_attach_controller" 00:07:32.608 }, 00:07:32.608 { 00:07:32.608 "method": "bdev_wait_for_examine" 00:07:32.608 } 00:07:32.608 ] 00:07:32.608 } 00:07:32.608 ] 00:07:32.608 } 00:07:32.608 [2024-07-24 21:47:38.284472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.866 [2024-07-24 21:47:38.384558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.866 [2024-07-24 21:47:38.441414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.124  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:33.124 00:07:33.124 21:47:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.124 00:07:33.124 real 0m18.198s 00:07:33.124 user 0m13.006s 00:07:33.124 sys 0m6.668s 00:07:33.124 21:47:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.124 21:47:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.124 ************************************ 00:07:33.124 END TEST spdk_dd_basic_rw 00:07:33.124 ************************************ 00:07:33.124 21:47:38 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:33.124 21:47:38 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.124 21:47:38 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.124 21:47:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:33.124 ************************************ 00:07:33.124 START TEST spdk_dd_posix 00:07:33.124 ************************************ 00:07:33.124 21:47:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:33.382 * Looking for test storage... 
00:07:33.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:33.382 * First test run, liburing in use 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.382 ************************************ 00:07:33.382 START TEST dd_flag_append 00:07:33.382 ************************************ 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:33.382 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=9qtecaqd3i02qfv0iyxxgjs3vaa5v7wl 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=d4d10ry5zrpxfxfe1ywc0in4xp8fxq1n 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 9qtecaqd3i02qfv0iyxxgjs3vaa5v7wl 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s d4d10ry5zrpxfxfe1ywc0in4xp8fxq1n 00:07:33.383 21:47:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:33.383 [2024-07-24 21:47:38.956138] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
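Editor's aside: the dd_flag_append run whose startup banner appears just above writes one 32-character random string into dd.dump0, a second into dd.dump1, copies dump0 onto dump1 with --oflag=append, and then requires dump1 to equal the second string followed by the first. A minimal sketch of that check; the variable names, the urandom/base64 stand-in for gen_bytes, and the cat-based comparison are assumptions, not harness code.

    #!/usr/bin/env bash
    # Illustrative append-flag check.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    a=$(head -c 24 /dev/urandom | base64)   # 32 characters, standing in for gen_bytes 32
    b=$(head -c 24 /dev/urandom | base64)
    printf %s "$a" > dd.dump0
    printf %s "$b" > dd.dump1

    "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append

    # dump1 must keep its original bytes, with dump0's bytes appended after them.
    [[ "$(cat dd.dump1)" == "${b}${a}" ]] || echo "append flag did not preserve existing data" >&2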
00:07:33.383 [2024-07-24 21:47:38.956248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75043 ] 00:07:33.383 [2024-07-24 21:47:39.093321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.640 [2024-07-24 21:47:39.192841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.640 [2024-07-24 21:47:39.249205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.898  Copying: 32/32 [B] (average 31 kBps) 00:07:33.898 00:07:33.898 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ d4d10ry5zrpxfxfe1ywc0in4xp8fxq1n9qtecaqd3i02qfv0iyxxgjs3vaa5v7wl == \d\4\d\1\0\r\y\5\z\r\p\x\f\x\f\e\1\y\w\c\0\i\n\4\x\p\8\f\x\q\1\n\9\q\t\e\c\a\q\d\3\i\0\2\q\f\v\0\i\y\x\x\g\j\s\3\v\a\a\5\v\7\w\l ]] 00:07:33.898 00:07:33.898 real 0m0.599s 00:07:33.898 user 0m0.343s 00:07:33.898 sys 0m0.272s 00:07:33.898 ************************************ 00:07:33.898 END TEST dd_flag_append 00:07:33.898 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.898 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:33.898 ************************************ 00:07:33.898 21:47:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:33.898 21:47:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.898 21:47:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.899 ************************************ 00:07:33.899 START TEST dd_flag_directory 00:07:33.899 ************************************ 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.899 21:47:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.899 [2024-07-24 21:47:39.605336] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:33.899 [2024-07-24 21:47:39.605456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75072 ] 00:07:34.157 [2024-07-24 21:47:39.744739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.157 [2024-07-24 21:47:39.838497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.415 [2024-07-24 21:47:39.892650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.415 [2024-07-24 21:47:39.925288] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:34.415 [2024-07-24 21:47:39.925347] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:34.415 [2024-07-24 21:47:39.925362] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.415 [2024-07-24 21:47:40.037823] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.415 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:34.673 [2024-07-24 21:47:40.177844] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:34.673 [2024-07-24 21:47:40.177942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75081 ] 00:07:34.673 [2024-07-24 21:47:40.307850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.930 [2024-07-24 21:47:40.406177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.930 [2024-07-24 21:47:40.460069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.930 [2024-07-24 21:47:40.492515] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:34.930 [2024-07-24 21:47:40.492574] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:34.930 [2024-07-24 21:47:40.492589] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.930 [2024-07-24 21:47:40.603696] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.190 00:07:35.190 real 0m1.145s 00:07:35.190 user 0m0.645s 00:07:35.190 sys 0m0.289s 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:35.190 ************************************ 00:07:35.190 END TEST dd_flag_directory 00:07:35.190 ************************************ 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:35.190 21:47:40 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:35.190 ************************************ 00:07:35.190 START TEST dd_flag_nofollow 00:07:35.190 ************************************ 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.190 21:47:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.190 [2024-07-24 21:47:40.796308] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:35.190 [2024-07-24 21:47:40.796422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75115 ] 00:07:35.453 [2024-07-24 21:47:40.936931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.453 [2024-07-24 21:47:41.039346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.453 [2024-07-24 21:47:41.095469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.453 [2024-07-24 21:47:41.129179] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:35.453 [2024-07-24 21:47:41.129242] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:35.453 [2024-07-24 21:47:41.129261] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.711 [2024-07-24 21:47:41.242582] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow 
-- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.711 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:35.711 [2024-07-24 21:47:41.384004] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:35.711 [2024-07-24 21:47:41.384115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75119 ] 00:07:35.968 [2024-07-24 21:47:41.524509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.968 [2024-07-24 21:47:41.626997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.968 [2024-07-24 21:47:41.684278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.226 [2024-07-24 21:47:41.718066] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.226 [2024-07-24 21:47:41.718124] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:36.226 [2024-07-24 21:47:41.718143] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.226 [2024-07-24 21:47:41.831476] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:36.226 21:47:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.484 [2024-07-24 21:47:41.965800] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
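Editor's aside: the two "Too many levels of symbolic links" failures above are the point of dd_flag_nofollow: with --iflag=nofollow or --oflag=nofollow, spdk_dd must refuse the dd.dump0.link and dd.dump1.link symlinks, while the plain copy through the link that starts next is expected to succeed. A minimal sketch of that expectation; the explicit exit-status handling is a stand-in for the harness's NOT wrapper.

    #!/usr/bin/env bash
    # Illustrative nofollow check.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link

    # Both of these must fail: nofollow refuses to traverse a symlinked input or output.
    if "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
      echo "expected --iflag=nofollow to reject a symlinked input" >&2
    fi
    if "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow; then
      echo "expected --oflag=nofollow to reject a symlinked output" >&2
    fi

    # Without the flag the same copy follows dd.dump0.link and succeeds.
    "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1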
00:07:36.484 [2024-07-24 21:47:41.965899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75132 ] 00:07:36.484 [2024-07-24 21:47:42.097802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.484 [2024-07-24 21:47:42.188496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.742 [2024-07-24 21:47:42.240650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.742  Copying: 512/512 [B] (average 500 kBps) 00:07:36.742 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 6t8848j6hwx3n574yz15wm0oar5vi6ug89zk417a4o3qn4pt2eemz7tbitcvv8o0zuglgcipk6cihvvuf9w1gvrs619o5d8n55tsiefrb9qe0bywhr2nnpkb4ucstcdcd1esni2y7htrxqslmy2par7bp82be5qa24swx716i2w66hege9g9rsmzxm4y5vxx81gg03hexxcuup37veph5hv5z1aryojickfjj981l44e8wnzcu1ds9oeer52b5u7htlb8npmac65wr7df9af2q99dka54svtow3hzjm3aqa7389b8em2gak6whv51xngf182wzequgy1oquy4qesgc7xllbwais0pd5hggwytlla3chnajp3ouclga72m0ibvq4zou4x85cjckjkpurl3nrft9b6yy7uo0rt8fwinu2p6pnksj6tvhu3ex4awazai7scfp62o7i78yxb7eu861gqy0st6za7topkwxg5nicpyyhzmbm0pbzzdwnzbavt == \6\t\8\8\4\8\j\6\h\w\x\3\n\5\7\4\y\z\1\5\w\m\0\o\a\r\5\v\i\6\u\g\8\9\z\k\4\1\7\a\4\o\3\q\n\4\p\t\2\e\e\m\z\7\t\b\i\t\c\v\v\8\o\0\z\u\g\l\g\c\i\p\k\6\c\i\h\v\v\u\f\9\w\1\g\v\r\s\6\1\9\o\5\d\8\n\5\5\t\s\i\e\f\r\b\9\q\e\0\b\y\w\h\r\2\n\n\p\k\b\4\u\c\s\t\c\d\c\d\1\e\s\n\i\2\y\7\h\t\r\x\q\s\l\m\y\2\p\a\r\7\b\p\8\2\b\e\5\q\a\2\4\s\w\x\7\1\6\i\2\w\6\6\h\e\g\e\9\g\9\r\s\m\z\x\m\4\y\5\v\x\x\8\1\g\g\0\3\h\e\x\x\c\u\u\p\3\7\v\e\p\h\5\h\v\5\z\1\a\r\y\o\j\i\c\k\f\j\j\9\8\1\l\4\4\e\8\w\n\z\c\u\1\d\s\9\o\e\e\r\5\2\b\5\u\7\h\t\l\b\8\n\p\m\a\c\6\5\w\r\7\d\f\9\a\f\2\q\9\9\d\k\a\5\4\s\v\t\o\w\3\h\z\j\m\3\a\q\a\7\3\8\9\b\8\e\m\2\g\a\k\6\w\h\v\5\1\x\n\g\f\1\8\2\w\z\e\q\u\g\y\1\o\q\u\y\4\q\e\s\g\c\7\x\l\l\b\w\a\i\s\0\p\d\5\h\g\g\w\y\t\l\l\a\3\c\h\n\a\j\p\3\o\u\c\l\g\a\7\2\m\0\i\b\v\q\4\z\o\u\4\x\8\5\c\j\c\k\j\k\p\u\r\l\3\n\r\f\t\9\b\6\y\y\7\u\o\0\r\t\8\f\w\i\n\u\2\p\6\p\n\k\s\j\6\t\v\h\u\3\e\x\4\a\w\a\z\a\i\7\s\c\f\p\6\2\o\7\i\7\8\y\x\b\7\e\u\8\6\1\g\q\y\0\s\t\6\z\a\7\t\o\p\k\w\x\g\5\n\i\c\p\y\y\h\z\m\b\m\0\p\b\z\z\d\w\n\z\b\a\v\t ]] 00:07:37.000 00:07:37.000 real 0m1.729s 00:07:37.000 user 0m0.968s 00:07:37.000 sys 0m0.560s 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:37.000 ************************************ 00:07:37.000 END TEST dd_flag_nofollow 00:07:37.000 ************************************ 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:37.000 ************************************ 00:07:37.000 START TEST dd_flag_noatime 00:07:37.000 ************************************ 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:37.000 21:47:42 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721857662 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721857662 00:07:37.000 21:47:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:37.935 21:47:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.935 [2024-07-24 21:47:43.576278] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:37.935 [2024-07-24 21:47:43.576366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75169 ] 00:07:38.193 [2024-07-24 21:47:43.713119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.193 [2024-07-24 21:47:43.814622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.193 [2024-07-24 21:47:43.870666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.451  Copying: 512/512 [B] (average 500 kBps) 00:07:38.451 00:07:38.451 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.451 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721857662 )) 00:07:38.451 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.451 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721857662 )) 00:07:38.451 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.451 [2024-07-24 21:47:44.164987] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
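The noatime check set up above records each dump file's access time in epoch seconds with stat --printf=%X, sleeps one second so any atime update would be observable, and copies dd.dump0 with --iflag=noatime; the assertions around this point expect both recorded values to be unchanged, while the plain copy just launched is allowed to advance the source's atime (the (( atime_if < ... )) check that follows). A rough stand-alone equivalent with GNU coreutils dd, assuming the filesystem actually updates atime on reads (no noatime/relatime mount suppression):

  atime_before=$(stat --printf=%X dd.dump0)   # access time, seconds since epoch
  sleep 1                                     # make a potential atime bump visible
  dd if=dd.dump0 of=dd.dump1 iflag=noatime    # read the source without touching its atime
  atime_after=$(stat --printf=%X dd.dump0)
  (( atime_before == atime_after ))           # passes when noatime was honoured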
00:07:38.451 [2024-07-24 21:47:44.165120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75188 ] 00:07:38.710 [2024-07-24 21:47:44.303522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.710 [2024-07-24 21:47:44.391324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.968 [2024-07-24 21:47:44.447920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.968  Copying: 512/512 [B] (average 500 kBps) 00:07:38.968 00:07:38.968 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.968 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721857664 )) 00:07:38.968 00:07:38.968 real 0m2.161s 00:07:38.968 user 0m0.640s 00:07:38.968 sys 0m0.559s 00:07:38.968 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.968 21:47:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:38.968 ************************************ 00:07:38.968 END TEST dd_flag_noatime 00:07:38.968 ************************************ 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:39.227 ************************************ 00:07:39.227 START TEST dd_flags_misc 00:07:39.227 ************************************ 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.227 21:47:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:39.227 [2024-07-24 21:47:44.775571] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
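dd_flags_misc, which starts above, drives a small matrix of open flags: the read side iterates over direct and nonblock, the write side over direct, nonblock, sync and dsync, and each combination copies 512 freshly generated bytes and then verifies the destination contents. A sketch of that looping structure, with a hypothetical copy_and_verify helper standing in for the spdk_dd call plus the content check:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)      # write side additionally allows sync/dsync
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          copy_and_verify --iflag="$flag_ro" --oflag="$flag_rw"   # hypothetical helper
      done
  done

The eight Copying: 512/512 entries that follow are the eight cells of this matrix.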
00:07:39.227 [2024-07-24 21:47:44.775698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75217 ] 00:07:39.227 [2024-07-24 21:47:44.906248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.487 [2024-07-24 21:47:44.992057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.487 [2024-07-24 21:47:45.046604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.749  Copying: 512/512 [B] (average 500 kBps) 00:07:39.749 00:07:39.749 21:47:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bmaha0omf1xah47g8a8fwnudvcdtm37zlw3s0ucw1a5asd72tzzgsegs5jvv544rc7kff425hvqyilptffc088ambynxteel9q61bo67s3uj8nxfk2jwhx85vzf9092l1fvjtf5ee4izfypvmhn3t6na9r3ejvqm7p897uubgsi9yt065352mkcogdkfpkcs20cnhue7ydp1njoso9bmzdpf8cmqkmxy7yemyabf0slgg0dyyh3pefa1yi4ct5l6nudnorujh9s78tork3f10tlhfbinzhzm76171bohvhpfqpbmul3hr9eyqr13xv34iadywyiw65whr20gcu1ceej8d0zemdkgm12a86mp1bu6a4f1d2btn0n57i5cvqeyzxycg89ii3uozj0vpiyssht75o1209a539grp0vr0muhboas88iqe3vvyyxp2s8zsnq6iqq16ya623z60ad2yvnhu1e83y4g46q97sm188nnc3cyvev5k8cofm8ogso == \8\b\m\a\h\a\0\o\m\f\1\x\a\h\4\7\g\8\a\8\f\w\n\u\d\v\c\d\t\m\3\7\z\l\w\3\s\0\u\c\w\1\a\5\a\s\d\7\2\t\z\z\g\s\e\g\s\5\j\v\v\5\4\4\r\c\7\k\f\f\4\2\5\h\v\q\y\i\l\p\t\f\f\c\0\8\8\a\m\b\y\n\x\t\e\e\l\9\q\6\1\b\o\6\7\s\3\u\j\8\n\x\f\k\2\j\w\h\x\8\5\v\z\f\9\0\9\2\l\1\f\v\j\t\f\5\e\e\4\i\z\f\y\p\v\m\h\n\3\t\6\n\a\9\r\3\e\j\v\q\m\7\p\8\9\7\u\u\b\g\s\i\9\y\t\0\6\5\3\5\2\m\k\c\o\g\d\k\f\p\k\c\s\2\0\c\n\h\u\e\7\y\d\p\1\n\j\o\s\o\9\b\m\z\d\p\f\8\c\m\q\k\m\x\y\7\y\e\m\y\a\b\f\0\s\l\g\g\0\d\y\y\h\3\p\e\f\a\1\y\i\4\c\t\5\l\6\n\u\d\n\o\r\u\j\h\9\s\7\8\t\o\r\k\3\f\1\0\t\l\h\f\b\i\n\z\h\z\m\7\6\1\7\1\b\o\h\v\h\p\f\q\p\b\m\u\l\3\h\r\9\e\y\q\r\1\3\x\v\3\4\i\a\d\y\w\y\i\w\6\5\w\h\r\2\0\g\c\u\1\c\e\e\j\8\d\0\z\e\m\d\k\g\m\1\2\a\8\6\m\p\1\b\u\6\a\4\f\1\d\2\b\t\n\0\n\5\7\i\5\c\v\q\e\y\z\x\y\c\g\8\9\i\i\3\u\o\z\j\0\v\p\i\y\s\s\h\t\7\5\o\1\2\0\9\a\5\3\9\g\r\p\0\v\r\0\m\u\h\b\o\a\s\8\8\i\q\e\3\v\v\y\y\x\p\2\s\8\z\s\n\q\6\i\q\q\1\6\y\a\6\2\3\z\6\0\a\d\2\y\v\n\h\u\1\e\8\3\y\4\g\4\6\q\9\7\s\m\1\8\8\n\n\c\3\c\y\v\e\v\5\k\8\c\o\f\m\8\o\g\s\o ]] 00:07:39.749 21:47:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.749 21:47:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:39.749 [2024-07-24 21:47:45.324292] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
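The very long [[ 8bmaha... == \8\b\m\a... ]] entry above is not corruption: it is bash xtrace printing the post-copy content check, with the right-hand operand shown character-by-character backslash-escaped because it is a quoted (literal) pattern inside [[ ]]. Reduced to its shape, the check is roughly:

  set -x
  data=$(< dd.dump0)                  # the 512 generated bytes
  [[ $(< dd.dump1) == "$data" ]]      # xtrace renders the quoted RHS as \8\b\m\a...

so an identical escaped and unescaped string on the two sides of == means the copy preserved the payload.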
00:07:39.749 [2024-07-24 21:47:45.324415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75226 ] 00:07:39.749 [2024-07-24 21:47:45.461683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.008 [2024-07-24 21:47:45.556818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.008 [2024-07-24 21:47:45.611330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.266  Copying: 512/512 [B] (average 500 kBps) 00:07:40.266 00:07:40.266 21:47:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bmaha0omf1xah47g8a8fwnudvcdtm37zlw3s0ucw1a5asd72tzzgsegs5jvv544rc7kff425hvqyilptffc088ambynxteel9q61bo67s3uj8nxfk2jwhx85vzf9092l1fvjtf5ee4izfypvmhn3t6na9r3ejvqm7p897uubgsi9yt065352mkcogdkfpkcs20cnhue7ydp1njoso9bmzdpf8cmqkmxy7yemyabf0slgg0dyyh3pefa1yi4ct5l6nudnorujh9s78tork3f10tlhfbinzhzm76171bohvhpfqpbmul3hr9eyqr13xv34iadywyiw65whr20gcu1ceej8d0zemdkgm12a86mp1bu6a4f1d2btn0n57i5cvqeyzxycg89ii3uozj0vpiyssht75o1209a539grp0vr0muhboas88iqe3vvyyxp2s8zsnq6iqq16ya623z60ad2yvnhu1e83y4g46q97sm188nnc3cyvev5k8cofm8ogso == \8\b\m\a\h\a\0\o\m\f\1\x\a\h\4\7\g\8\a\8\f\w\n\u\d\v\c\d\t\m\3\7\z\l\w\3\s\0\u\c\w\1\a\5\a\s\d\7\2\t\z\z\g\s\e\g\s\5\j\v\v\5\4\4\r\c\7\k\f\f\4\2\5\h\v\q\y\i\l\p\t\f\f\c\0\8\8\a\m\b\y\n\x\t\e\e\l\9\q\6\1\b\o\6\7\s\3\u\j\8\n\x\f\k\2\j\w\h\x\8\5\v\z\f\9\0\9\2\l\1\f\v\j\t\f\5\e\e\4\i\z\f\y\p\v\m\h\n\3\t\6\n\a\9\r\3\e\j\v\q\m\7\p\8\9\7\u\u\b\g\s\i\9\y\t\0\6\5\3\5\2\m\k\c\o\g\d\k\f\p\k\c\s\2\0\c\n\h\u\e\7\y\d\p\1\n\j\o\s\o\9\b\m\z\d\p\f\8\c\m\q\k\m\x\y\7\y\e\m\y\a\b\f\0\s\l\g\g\0\d\y\y\h\3\p\e\f\a\1\y\i\4\c\t\5\l\6\n\u\d\n\o\r\u\j\h\9\s\7\8\t\o\r\k\3\f\1\0\t\l\h\f\b\i\n\z\h\z\m\7\6\1\7\1\b\o\h\v\h\p\f\q\p\b\m\u\l\3\h\r\9\e\y\q\r\1\3\x\v\3\4\i\a\d\y\w\y\i\w\6\5\w\h\r\2\0\g\c\u\1\c\e\e\j\8\d\0\z\e\m\d\k\g\m\1\2\a\8\6\m\p\1\b\u\6\a\4\f\1\d\2\b\t\n\0\n\5\7\i\5\c\v\q\e\y\z\x\y\c\g\8\9\i\i\3\u\o\z\j\0\v\p\i\y\s\s\h\t\7\5\o\1\2\0\9\a\5\3\9\g\r\p\0\v\r\0\m\u\h\b\o\a\s\8\8\i\q\e\3\v\v\y\y\x\p\2\s\8\z\s\n\q\6\i\q\q\1\6\y\a\6\2\3\z\6\0\a\d\2\y\v\n\h\u\1\e\8\3\y\4\g\4\6\q\9\7\s\m\1\8\8\n\n\c\3\c\y\v\e\v\5\k\8\c\o\f\m\8\o\g\s\o ]] 00:07:40.266 21:47:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.266 21:47:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:40.266 [2024-07-24 21:47:45.895908] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:40.266 [2024-07-24 21:47:45.896005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75236 ] 00:07:40.525 [2024-07-24 21:47:46.035075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.525 [2024-07-24 21:47:46.126022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.525 [2024-07-24 21:47:46.180564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.784  Copying: 512/512 [B] (average 166 kBps) 00:07:40.784 00:07:40.784 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bmaha0omf1xah47g8a8fwnudvcdtm37zlw3s0ucw1a5asd72tzzgsegs5jvv544rc7kff425hvqyilptffc088ambynxteel9q61bo67s3uj8nxfk2jwhx85vzf9092l1fvjtf5ee4izfypvmhn3t6na9r3ejvqm7p897uubgsi9yt065352mkcogdkfpkcs20cnhue7ydp1njoso9bmzdpf8cmqkmxy7yemyabf0slgg0dyyh3pefa1yi4ct5l6nudnorujh9s78tork3f10tlhfbinzhzm76171bohvhpfqpbmul3hr9eyqr13xv34iadywyiw65whr20gcu1ceej8d0zemdkgm12a86mp1bu6a4f1d2btn0n57i5cvqeyzxycg89ii3uozj0vpiyssht75o1209a539grp0vr0muhboas88iqe3vvyyxp2s8zsnq6iqq16ya623z60ad2yvnhu1e83y4g46q97sm188nnc3cyvev5k8cofm8ogso == \8\b\m\a\h\a\0\o\m\f\1\x\a\h\4\7\g\8\a\8\f\w\n\u\d\v\c\d\t\m\3\7\z\l\w\3\s\0\u\c\w\1\a\5\a\s\d\7\2\t\z\z\g\s\e\g\s\5\j\v\v\5\4\4\r\c\7\k\f\f\4\2\5\h\v\q\y\i\l\p\t\f\f\c\0\8\8\a\m\b\y\n\x\t\e\e\l\9\q\6\1\b\o\6\7\s\3\u\j\8\n\x\f\k\2\j\w\h\x\8\5\v\z\f\9\0\9\2\l\1\f\v\j\t\f\5\e\e\4\i\z\f\y\p\v\m\h\n\3\t\6\n\a\9\r\3\e\j\v\q\m\7\p\8\9\7\u\u\b\g\s\i\9\y\t\0\6\5\3\5\2\m\k\c\o\g\d\k\f\p\k\c\s\2\0\c\n\h\u\e\7\y\d\p\1\n\j\o\s\o\9\b\m\z\d\p\f\8\c\m\q\k\m\x\y\7\y\e\m\y\a\b\f\0\s\l\g\g\0\d\y\y\h\3\p\e\f\a\1\y\i\4\c\t\5\l\6\n\u\d\n\o\r\u\j\h\9\s\7\8\t\o\r\k\3\f\1\0\t\l\h\f\b\i\n\z\h\z\m\7\6\1\7\1\b\o\h\v\h\p\f\q\p\b\m\u\l\3\h\r\9\e\y\q\r\1\3\x\v\3\4\i\a\d\y\w\y\i\w\6\5\w\h\r\2\0\g\c\u\1\c\e\e\j\8\d\0\z\e\m\d\k\g\m\1\2\a\8\6\m\p\1\b\u\6\a\4\f\1\d\2\b\t\n\0\n\5\7\i\5\c\v\q\e\y\z\x\y\c\g\8\9\i\i\3\u\o\z\j\0\v\p\i\y\s\s\h\t\7\5\o\1\2\0\9\a\5\3\9\g\r\p\0\v\r\0\m\u\h\b\o\a\s\8\8\i\q\e\3\v\v\y\y\x\p\2\s\8\z\s\n\q\6\i\q\q\1\6\y\a\6\2\3\z\6\0\a\d\2\y\v\n\h\u\1\e\8\3\y\4\g\4\6\q\9\7\s\m\1\8\8\n\n\c\3\c\y\v\e\v\5\k\8\c\o\f\m\8\o\g\s\o ]] 00:07:40.784 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.784 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:40.784 [2024-07-24 21:47:46.468345] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:40.784 [2024-07-24 21:47:46.468456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75245 ] 00:07:41.042 [2024-07-24 21:47:46.607952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.042 [2024-07-24 21:47:46.706027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.301 [2024-07-24 21:47:46.760418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.301  Copying: 512/512 [B] (average 250 kBps) 00:07:41.301 00:07:41.301 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bmaha0omf1xah47g8a8fwnudvcdtm37zlw3s0ucw1a5asd72tzzgsegs5jvv544rc7kff425hvqyilptffc088ambynxteel9q61bo67s3uj8nxfk2jwhx85vzf9092l1fvjtf5ee4izfypvmhn3t6na9r3ejvqm7p897uubgsi9yt065352mkcogdkfpkcs20cnhue7ydp1njoso9bmzdpf8cmqkmxy7yemyabf0slgg0dyyh3pefa1yi4ct5l6nudnorujh9s78tork3f10tlhfbinzhzm76171bohvhpfqpbmul3hr9eyqr13xv34iadywyiw65whr20gcu1ceej8d0zemdkgm12a86mp1bu6a4f1d2btn0n57i5cvqeyzxycg89ii3uozj0vpiyssht75o1209a539grp0vr0muhboas88iqe3vvyyxp2s8zsnq6iqq16ya623z60ad2yvnhu1e83y4g46q97sm188nnc3cyvev5k8cofm8ogso == \8\b\m\a\h\a\0\o\m\f\1\x\a\h\4\7\g\8\a\8\f\w\n\u\d\v\c\d\t\m\3\7\z\l\w\3\s\0\u\c\w\1\a\5\a\s\d\7\2\t\z\z\g\s\e\g\s\5\j\v\v\5\4\4\r\c\7\k\f\f\4\2\5\h\v\q\y\i\l\p\t\f\f\c\0\8\8\a\m\b\y\n\x\t\e\e\l\9\q\6\1\b\o\6\7\s\3\u\j\8\n\x\f\k\2\j\w\h\x\8\5\v\z\f\9\0\9\2\l\1\f\v\j\t\f\5\e\e\4\i\z\f\y\p\v\m\h\n\3\t\6\n\a\9\r\3\e\j\v\q\m\7\p\8\9\7\u\u\b\g\s\i\9\y\t\0\6\5\3\5\2\m\k\c\o\g\d\k\f\p\k\c\s\2\0\c\n\h\u\e\7\y\d\p\1\n\j\o\s\o\9\b\m\z\d\p\f\8\c\m\q\k\m\x\y\7\y\e\m\y\a\b\f\0\s\l\g\g\0\d\y\y\h\3\p\e\f\a\1\y\i\4\c\t\5\l\6\n\u\d\n\o\r\u\j\h\9\s\7\8\t\o\r\k\3\f\1\0\t\l\h\f\b\i\n\z\h\z\m\7\6\1\7\1\b\o\h\v\h\p\f\q\p\b\m\u\l\3\h\r\9\e\y\q\r\1\3\x\v\3\4\i\a\d\y\w\y\i\w\6\5\w\h\r\2\0\g\c\u\1\c\e\e\j\8\d\0\z\e\m\d\k\g\m\1\2\a\8\6\m\p\1\b\u\6\a\4\f\1\d\2\b\t\n\0\n\5\7\i\5\c\v\q\e\y\z\x\y\c\g\8\9\i\i\3\u\o\z\j\0\v\p\i\y\s\s\h\t\7\5\o\1\2\0\9\a\5\3\9\g\r\p\0\v\r\0\m\u\h\b\o\a\s\8\8\i\q\e\3\v\v\y\y\x\p\2\s\8\z\s\n\q\6\i\q\q\1\6\y\a\6\2\3\z\6\0\a\d\2\y\v\n\h\u\1\e\8\3\y\4\g\4\6\q\9\7\s\m\1\8\8\n\n\c\3\c\y\v\e\v\5\k\8\c\o\f\m\8\o\g\s\o ]] 00:07:41.301 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:41.301 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:41.301 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:41.301 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:41.301 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:41.301 21:47:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:41.559 [2024-07-24 21:47:47.068805] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
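From the run just launched, the read side switches to nonblock while the write side cycles through the same four flags again; the payload is also regenerated, which is why the comparison strings change from 8bmaha... to i3x7... below. The flag names map onto the usual open(2) flags (direct, nonblock, sync, dsync) and are also accepted by GNU coreutils dd, so one cell of the matrix has a rough non-SPDK analogue such as:

  dd if=dd.dump0 of=dd.dump1 bs=512 count=1 iflag=nonblock oflag=dsync

(assuming a filesystem that accepts these flags at 512-byte granularity; O_DIRECT in particular imposes alignment requirements).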
00:07:41.559 [2024-07-24 21:47:47.068952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75260 ] 00:07:41.559 [2024-07-24 21:47:47.216901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.818 [2024-07-24 21:47:47.309316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.818 [2024-07-24 21:47:47.363884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.077  Copying: 512/512 [B] (average 500 kBps) 00:07:42.077 00:07:42.077 21:47:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ i3x7yjpp6q7vlpypy0c76ugdmt8j3rooz59yn0brlxytkvs8pkbapg0yr0gqspfj4mqa74pswtglrdml0fwyxmj9o9o5erc73zlrv4mffz9g4g0897gjxwtbnu2kwdiap875iamwqnscijtyn4j9da2e460n8roj48uz6k1qofnu3jna3e3xhy87zjd0pjfpvx4e7xzki6b3w6wkkxacocxh65jb3n6uzf51sujppnm1k2mnnrq447zorbnmh5ak5rbb92ukz729cav25k2xydbsh9uayrfw3twrme1gsd6bkteqmbbbbtb6vzwvef83he4a1vh314wk59oaqc6es6xqm8ohtbe1sfyybzzyjndarsumwgwl5e9lxim7vlo4a91jzpfwe13jhz82yv5s5lr74x87phq2elmp9l8mamvy46r8ya0ycpght7apx93ngb92thp87zcur5tppcarhtl0d4rvmau2v8nv4d42l6paayqsf2idubiz0hsulmqf == \i\3\x\7\y\j\p\p\6\q\7\v\l\p\y\p\y\0\c\7\6\u\g\d\m\t\8\j\3\r\o\o\z\5\9\y\n\0\b\r\l\x\y\t\k\v\s\8\p\k\b\a\p\g\0\y\r\0\g\q\s\p\f\j\4\m\q\a\7\4\p\s\w\t\g\l\r\d\m\l\0\f\w\y\x\m\j\9\o\9\o\5\e\r\c\7\3\z\l\r\v\4\m\f\f\z\9\g\4\g\0\8\9\7\g\j\x\w\t\b\n\u\2\k\w\d\i\a\p\8\7\5\i\a\m\w\q\n\s\c\i\j\t\y\n\4\j\9\d\a\2\e\4\6\0\n\8\r\o\j\4\8\u\z\6\k\1\q\o\f\n\u\3\j\n\a\3\e\3\x\h\y\8\7\z\j\d\0\p\j\f\p\v\x\4\e\7\x\z\k\i\6\b\3\w\6\w\k\k\x\a\c\o\c\x\h\6\5\j\b\3\n\6\u\z\f\5\1\s\u\j\p\p\n\m\1\k\2\m\n\n\r\q\4\4\7\z\o\r\b\n\m\h\5\a\k\5\r\b\b\9\2\u\k\z\7\2\9\c\a\v\2\5\k\2\x\y\d\b\s\h\9\u\a\y\r\f\w\3\t\w\r\m\e\1\g\s\d\6\b\k\t\e\q\m\b\b\b\b\t\b\6\v\z\w\v\e\f\8\3\h\e\4\a\1\v\h\3\1\4\w\k\5\9\o\a\q\c\6\e\s\6\x\q\m\8\o\h\t\b\e\1\s\f\y\y\b\z\z\y\j\n\d\a\r\s\u\m\w\g\w\l\5\e\9\l\x\i\m\7\v\l\o\4\a\9\1\j\z\p\f\w\e\1\3\j\h\z\8\2\y\v\5\s\5\l\r\7\4\x\8\7\p\h\q\2\e\l\m\p\9\l\8\m\a\m\v\y\4\6\r\8\y\a\0\y\c\p\g\h\t\7\a\p\x\9\3\n\g\b\9\2\t\h\p\8\7\z\c\u\r\5\t\p\p\c\a\r\h\t\l\0\d\4\r\v\m\a\u\2\v\8\n\v\4\d\4\2\l\6\p\a\a\y\q\s\f\2\i\d\u\b\i\z\0\h\s\u\l\m\q\f ]] 00:07:42.077 21:47:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.077 21:47:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:42.077 [2024-07-24 21:47:47.647299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:42.077 [2024-07-24 21:47:47.647406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75264 ] 00:07:42.077 [2024-07-24 21:47:47.784859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.335 [2024-07-24 21:47:47.877915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.335 [2024-07-24 21:47:47.931230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.594  Copying: 512/512 [B] (average 500 kBps) 00:07:42.594 00:07:42.594 21:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ i3x7yjpp6q7vlpypy0c76ugdmt8j3rooz59yn0brlxytkvs8pkbapg0yr0gqspfj4mqa74pswtglrdml0fwyxmj9o9o5erc73zlrv4mffz9g4g0897gjxwtbnu2kwdiap875iamwqnscijtyn4j9da2e460n8roj48uz6k1qofnu3jna3e3xhy87zjd0pjfpvx4e7xzki6b3w6wkkxacocxh65jb3n6uzf51sujppnm1k2mnnrq447zorbnmh5ak5rbb92ukz729cav25k2xydbsh9uayrfw3twrme1gsd6bkteqmbbbbtb6vzwvef83he4a1vh314wk59oaqc6es6xqm8ohtbe1sfyybzzyjndarsumwgwl5e9lxim7vlo4a91jzpfwe13jhz82yv5s5lr74x87phq2elmp9l8mamvy46r8ya0ycpght7apx93ngb92thp87zcur5tppcarhtl0d4rvmau2v8nv4d42l6paayqsf2idubiz0hsulmqf == \i\3\x\7\y\j\p\p\6\q\7\v\l\p\y\p\y\0\c\7\6\u\g\d\m\t\8\j\3\r\o\o\z\5\9\y\n\0\b\r\l\x\y\t\k\v\s\8\p\k\b\a\p\g\0\y\r\0\g\q\s\p\f\j\4\m\q\a\7\4\p\s\w\t\g\l\r\d\m\l\0\f\w\y\x\m\j\9\o\9\o\5\e\r\c\7\3\z\l\r\v\4\m\f\f\z\9\g\4\g\0\8\9\7\g\j\x\w\t\b\n\u\2\k\w\d\i\a\p\8\7\5\i\a\m\w\q\n\s\c\i\j\t\y\n\4\j\9\d\a\2\e\4\6\0\n\8\r\o\j\4\8\u\z\6\k\1\q\o\f\n\u\3\j\n\a\3\e\3\x\h\y\8\7\z\j\d\0\p\j\f\p\v\x\4\e\7\x\z\k\i\6\b\3\w\6\w\k\k\x\a\c\o\c\x\h\6\5\j\b\3\n\6\u\z\f\5\1\s\u\j\p\p\n\m\1\k\2\m\n\n\r\q\4\4\7\z\o\r\b\n\m\h\5\a\k\5\r\b\b\9\2\u\k\z\7\2\9\c\a\v\2\5\k\2\x\y\d\b\s\h\9\u\a\y\r\f\w\3\t\w\r\m\e\1\g\s\d\6\b\k\t\e\q\m\b\b\b\b\t\b\6\v\z\w\v\e\f\8\3\h\e\4\a\1\v\h\3\1\4\w\k\5\9\o\a\q\c\6\e\s\6\x\q\m\8\o\h\t\b\e\1\s\f\y\y\b\z\z\y\j\n\d\a\r\s\u\m\w\g\w\l\5\e\9\l\x\i\m\7\v\l\o\4\a\9\1\j\z\p\f\w\e\1\3\j\h\z\8\2\y\v\5\s\5\l\r\7\4\x\8\7\p\h\q\2\e\l\m\p\9\l\8\m\a\m\v\y\4\6\r\8\y\a\0\y\c\p\g\h\t\7\a\p\x\9\3\n\g\b\9\2\t\h\p\8\7\z\c\u\r\5\t\p\p\c\a\r\h\t\l\0\d\4\r\v\m\a\u\2\v\8\n\v\4\d\4\2\l\6\p\a\a\y\q\s\f\2\i\d\u\b\i\z\0\h\s\u\l\m\q\f ]] 00:07:42.594 21:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.594 21:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:42.594 [2024-07-24 21:47:48.213911] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:42.594 [2024-07-24 21:47:48.214021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75279 ] 00:07:42.852 [2024-07-24 21:47:48.350598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.852 [2024-07-24 21:47:48.440428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.852 [2024-07-24 21:47:48.492723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.126  Copying: 512/512 [B] (average 500 kBps) 00:07:43.126 00:07:43.126 21:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ i3x7yjpp6q7vlpypy0c76ugdmt8j3rooz59yn0brlxytkvs8pkbapg0yr0gqspfj4mqa74pswtglrdml0fwyxmj9o9o5erc73zlrv4mffz9g4g0897gjxwtbnu2kwdiap875iamwqnscijtyn4j9da2e460n8roj48uz6k1qofnu3jna3e3xhy87zjd0pjfpvx4e7xzki6b3w6wkkxacocxh65jb3n6uzf51sujppnm1k2mnnrq447zorbnmh5ak5rbb92ukz729cav25k2xydbsh9uayrfw3twrme1gsd6bkteqmbbbbtb6vzwvef83he4a1vh314wk59oaqc6es6xqm8ohtbe1sfyybzzyjndarsumwgwl5e9lxim7vlo4a91jzpfwe13jhz82yv5s5lr74x87phq2elmp9l8mamvy46r8ya0ycpght7apx93ngb92thp87zcur5tppcarhtl0d4rvmau2v8nv4d42l6paayqsf2idubiz0hsulmqf == \i\3\x\7\y\j\p\p\6\q\7\v\l\p\y\p\y\0\c\7\6\u\g\d\m\t\8\j\3\r\o\o\z\5\9\y\n\0\b\r\l\x\y\t\k\v\s\8\p\k\b\a\p\g\0\y\r\0\g\q\s\p\f\j\4\m\q\a\7\4\p\s\w\t\g\l\r\d\m\l\0\f\w\y\x\m\j\9\o\9\o\5\e\r\c\7\3\z\l\r\v\4\m\f\f\z\9\g\4\g\0\8\9\7\g\j\x\w\t\b\n\u\2\k\w\d\i\a\p\8\7\5\i\a\m\w\q\n\s\c\i\j\t\y\n\4\j\9\d\a\2\e\4\6\0\n\8\r\o\j\4\8\u\z\6\k\1\q\o\f\n\u\3\j\n\a\3\e\3\x\h\y\8\7\z\j\d\0\p\j\f\p\v\x\4\e\7\x\z\k\i\6\b\3\w\6\w\k\k\x\a\c\o\c\x\h\6\5\j\b\3\n\6\u\z\f\5\1\s\u\j\p\p\n\m\1\k\2\m\n\n\r\q\4\4\7\z\o\r\b\n\m\h\5\a\k\5\r\b\b\9\2\u\k\z\7\2\9\c\a\v\2\5\k\2\x\y\d\b\s\h\9\u\a\y\r\f\w\3\t\w\r\m\e\1\g\s\d\6\b\k\t\e\q\m\b\b\b\b\t\b\6\v\z\w\v\e\f\8\3\h\e\4\a\1\v\h\3\1\4\w\k\5\9\o\a\q\c\6\e\s\6\x\q\m\8\o\h\t\b\e\1\s\f\y\y\b\z\z\y\j\n\d\a\r\s\u\m\w\g\w\l\5\e\9\l\x\i\m\7\v\l\o\4\a\9\1\j\z\p\f\w\e\1\3\j\h\z\8\2\y\v\5\s\5\l\r\7\4\x\8\7\p\h\q\2\e\l\m\p\9\l\8\m\a\m\v\y\4\6\r\8\y\a\0\y\c\p\g\h\t\7\a\p\x\9\3\n\g\b\9\2\t\h\p\8\7\z\c\u\r\5\t\p\p\c\a\r\h\t\l\0\d\4\r\v\m\a\u\2\v\8\n\v\4\d\4\2\l\6\p\a\a\y\q\s\f\2\i\d\u\b\i\z\0\h\s\u\l\m\q\f ]] 00:07:43.126 21:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.126 21:47:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:43.126 [2024-07-24 21:47:48.774743] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:43.126 [2024-07-24 21:47:48.774880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75289 ] 00:07:43.395 [2024-07-24 21:47:48.917990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.395 [2024-07-24 21:47:49.008440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.395 [2024-07-24 21:47:49.060893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.653  Copying: 512/512 [B] (average 500 kBps) 00:07:43.653 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ i3x7yjpp6q7vlpypy0c76ugdmt8j3rooz59yn0brlxytkvs8pkbapg0yr0gqspfj4mqa74pswtglrdml0fwyxmj9o9o5erc73zlrv4mffz9g4g0897gjxwtbnu2kwdiap875iamwqnscijtyn4j9da2e460n8roj48uz6k1qofnu3jna3e3xhy87zjd0pjfpvx4e7xzki6b3w6wkkxacocxh65jb3n6uzf51sujppnm1k2mnnrq447zorbnmh5ak5rbb92ukz729cav25k2xydbsh9uayrfw3twrme1gsd6bkteqmbbbbtb6vzwvef83he4a1vh314wk59oaqc6es6xqm8ohtbe1sfyybzzyjndarsumwgwl5e9lxim7vlo4a91jzpfwe13jhz82yv5s5lr74x87phq2elmp9l8mamvy46r8ya0ycpght7apx93ngb92thp87zcur5tppcarhtl0d4rvmau2v8nv4d42l6paayqsf2idubiz0hsulmqf == \i\3\x\7\y\j\p\p\6\q\7\v\l\p\y\p\y\0\c\7\6\u\g\d\m\t\8\j\3\r\o\o\z\5\9\y\n\0\b\r\l\x\y\t\k\v\s\8\p\k\b\a\p\g\0\y\r\0\g\q\s\p\f\j\4\m\q\a\7\4\p\s\w\t\g\l\r\d\m\l\0\f\w\y\x\m\j\9\o\9\o\5\e\r\c\7\3\z\l\r\v\4\m\f\f\z\9\g\4\g\0\8\9\7\g\j\x\w\t\b\n\u\2\k\w\d\i\a\p\8\7\5\i\a\m\w\q\n\s\c\i\j\t\y\n\4\j\9\d\a\2\e\4\6\0\n\8\r\o\j\4\8\u\z\6\k\1\q\o\f\n\u\3\j\n\a\3\e\3\x\h\y\8\7\z\j\d\0\p\j\f\p\v\x\4\e\7\x\z\k\i\6\b\3\w\6\w\k\k\x\a\c\o\c\x\h\6\5\j\b\3\n\6\u\z\f\5\1\s\u\j\p\p\n\m\1\k\2\m\n\n\r\q\4\4\7\z\o\r\b\n\m\h\5\a\k\5\r\b\b\9\2\u\k\z\7\2\9\c\a\v\2\5\k\2\x\y\d\b\s\h\9\u\a\y\r\f\w\3\t\w\r\m\e\1\g\s\d\6\b\k\t\e\q\m\b\b\b\b\t\b\6\v\z\w\v\e\f\8\3\h\e\4\a\1\v\h\3\1\4\w\k\5\9\o\a\q\c\6\e\s\6\x\q\m\8\o\h\t\b\e\1\s\f\y\y\b\z\z\y\j\n\d\a\r\s\u\m\w\g\w\l\5\e\9\l\x\i\m\7\v\l\o\4\a\9\1\j\z\p\f\w\e\1\3\j\h\z\8\2\y\v\5\s\5\l\r\7\4\x\8\7\p\h\q\2\e\l\m\p\9\l\8\m\a\m\v\y\4\6\r\8\y\a\0\y\c\p\g\h\t\7\a\p\x\9\3\n\g\b\9\2\t\h\p\8\7\z\c\u\r\5\t\p\p\c\a\r\h\t\l\0\d\4\r\v\m\a\u\2\v\8\n\v\4\d\4\2\l\6\p\a\a\y\q\s\f\2\i\d\u\b\i\z\0\h\s\u\l\m\q\f ]] 00:07:43.653 00:07:43.653 real 0m4.563s 00:07:43.653 user 0m2.515s 00:07:43.653 sys 0m2.166s 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:43.653 ************************************ 00:07:43.653 END TEST dd_flags_misc 00:07:43.653 ************************************ 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:43.653 * Second test run, disabling liburing, forcing AIO 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:43.653 
************************************ 00:07:43.653 START TEST dd_flag_append_forced_aio 00:07:43.653 ************************************ 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=rk1digrnd67ihakc5zry390j6xb9g9f1 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=vflcs2q9bzbfxwq06h99xjodpzemdyrz 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s rk1digrnd67ihakc5zry390j6xb9g9f1 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s vflcs2q9bzbfxwq06h99xjodpzemdyrz 00:07:43.653 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:43.935 [2024-07-24 21:47:49.388860] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
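This is the first test of the second pass announced above ("Second test run, disabling liburing, forcing AIO"): DD_APP now carries --aio, so every spdk_dd invocation below runs without liburing. The append case itself writes one 32-byte string into each dump file, copies dump0 onto dump1 with --oflag=append, and the check in the next entries expects dump1 to hold its own original bytes followed by dump0's (the vflcs...rk1di... concatenation). With coreutils dd the same expectation looks like:

  printf %s "$dump0" > dd.dump0                          # 32 bytes each
  printf %s "$dump1" > dd.dump1
  dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc   # append, do not truncate first
  [[ $(< dd.dump1) == "${dump1}${dump0}" ]]              # original dump1 bytes, then dump0's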
00:07:43.935 [2024-07-24 21:47:49.388972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75317 ] 00:07:43.935 [2024-07-24 21:47:49.527414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.935 [2024-07-24 21:47:49.616991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.193 [2024-07-24 21:47:49.669471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.193  Copying: 32/32 [B] (average 31 kBps) 00:07:44.193 00:07:44.193 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ vflcs2q9bzbfxwq06h99xjodpzemdyrzrk1digrnd67ihakc5zry390j6xb9g9f1 == \v\f\l\c\s\2\q\9\b\z\b\f\x\w\q\0\6\h\9\9\x\j\o\d\p\z\e\m\d\y\r\z\r\k\1\d\i\g\r\n\d\6\7\i\h\a\k\c\5\z\r\y\3\9\0\j\6\x\b\9\g\9\f\1 ]] 00:07:44.193 00:07:44.193 real 0m0.574s 00:07:44.193 user 0m0.311s 00:07:44.193 sys 0m0.143s 00:07:44.193 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.193 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:44.193 ************************************ 00:07:44.193 END TEST dd_flag_append_forced_aio 00:07:44.193 ************************************ 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:44.450 ************************************ 00:07:44.450 START TEST dd_flag_directory_forced_aio 00:07:44.450 ************************************ 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.450 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 
-- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.451 21:47:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.451 [2024-07-24 21:47:50.012389] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:44.451 [2024-07-24 21:47:50.012486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75344 ] 00:07:44.451 [2024-07-24 21:47:50.145608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.708 [2024-07-24 21:47:50.235254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.708 [2024-07-24 21:47:50.287891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.708 [2024-07-24 21:47:50.317119] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:44.708 [2024-07-24 21:47:50.317186] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:44.708 [2024-07-24 21:47:50.317212] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.708 [2024-07-24 21:47:50.424222] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:44.967 21:47:50 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.967 21:47:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:44.967 [2024-07-24 21:47:50.557870] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:44.967 [2024-07-24 21:47:50.557965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75353 ] 00:07:45.225 [2024-07-24 21:47:50.693343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.225 [2024-07-24 21:47:50.784990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.225 [2024-07-24 21:47:50.837461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.225 [2024-07-24 21:47:50.868962] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:45.225 [2024-07-24 21:47:50.869027] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:45.225 [2024-07-24 21:47:50.869063] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.484 [2024-07-24 21:47:50.977950] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.484 00:07:45.484 real 0m1.108s 00:07:45.484 
user 0m0.623s 00:07:45.484 sys 0m0.273s 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:45.484 ************************************ 00:07:45.484 END TEST dd_flag_directory_forced_aio 00:07:45.484 ************************************ 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:45.484 ************************************ 00:07:45.484 START TEST dd_flag_nofollow_forced_aio 00:07:45.484 ************************************ 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 
-- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.484 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.484 [2024-07-24 21:47:51.177261] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:45.484 [2024-07-24 21:47:51.177399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75382 ] 00:07:45.742 [2024-07-24 21:47:51.319680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.742 [2024-07-24 21:47:51.408956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.999 [2024-07-24 21:47:51.461289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.999 [2024-07-24 21:47:51.490196] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:45.999 [2024-07-24 21:47:51.490265] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:45.999 [2024-07-24 21:47:51.490287] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.999 [2024-07-24 21:47:51.597052] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:45.999 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
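The recurring type -t / type -P / [[ -x ]] entries around each NOT-wrapped call are the harness confirming that the spdk_dd path actually resolves to something executable before running the negative test, so a missing binary cannot masquerade as the expected ELOOP or ENOTDIR failure. The idea, reduced to a sketch (hypothetical helper name, not the harness's exact code):

  is_runnable() {
      local arg=$1
      case "$(type -t "$arg")" in
          builtin|function) return 0 ;;                  # the shell can run it directly
          *) arg=$(type -P "$arg") && [[ -x $arg ]] ;;   # else it must resolve to an executable file
      esac
  }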
00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.000 21:47:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:46.257 [2024-07-24 21:47:51.729980] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:46.257 [2024-07-24 21:47:51.730072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75391 ] 00:07:46.257 [2024-07-24 21:47:51.865664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.257 [2024-07-24 21:47:51.955263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.515 [2024-07-24 21:47:52.007456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.515 [2024-07-24 21:47:52.036339] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:46.515 [2024-07-24 21:47:52.036400] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:46.515 [2024-07-24 21:47:52.036425] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.515 [2024-07-24 21:47:52.142825] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.515 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.772 [2024-07-24 21:47:52.281070] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:46.772 [2024-07-24 21:47:52.281166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75399 ] 00:07:46.772 [2024-07-24 21:47:52.421553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.030 [2024-07-24 21:47:52.517078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.030 [2024-07-24 21:47:52.569421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:47.288  Copying: 512/512 [B] (average 500 kBps) 00:07:47.288 00:07:47.288 ************************************ 00:07:47.288 END TEST dd_flag_nofollow_forced_aio 00:07:47.288 ************************************ 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ bflh99wwx8fw4e3udknhzt84a2urj61qlcoha3l2i93p88scsxw097kl7se6ji36brp44ofaj3w4ja57yzl3cha9dqpceqbql9cxhysuu0hklviskren8drcqhu1ou2nkipqvv7nozohwa34kjqrpplu98zwx7qotiex1gpohtl0rnqw0vmbs9k631ofm45kep7sle9v9lsziblp9k7p512d9p0gjxkr7iy8645ejybygnntaku94yoh843rjnl3zai3tr104mjsx8245ko48x7o5f787vhw6t5xjhogb9waqvy6un7g5z1fmc54qmji50r5me2lvjx5ppg0seugvdlb9og9k2ui8bcgaqgvbyx0fgf47p4ypgdv2i5gyn8ifvq1fsmevvlgoebr7pfpe10f1bghtpaglgbycxj9kltebptr8hx333gak9dee0xo9vetiydq126qr4hycim7gfd0pj02qzy8jyif23nd5ifcsjzf7wt8c0d4t8mp55cp == \b\f\l\h\9\9\w\w\x\8\f\w\4\e\3\u\d\k\n\h\z\t\8\4\a\2\u\r\j\6\1\q\l\c\o\h\a\3\l\2\i\9\3\p\8\8\s\c\s\x\w\0\9\7\k\l\7\s\e\6\j\i\3\6\b\r\p\4\4\o\f\a\j\3\w\4\j\a\5\7\y\z\l\3\c\h\a\9\d\q\p\c\e\q\b\q\l\9\c\x\h\y\s\u\u\0\h\k\l\v\i\s\k\r\e\n\8\d\r\c\q\h\u\1\o\u\2\n\k\i\p\q\v\v\7\n\o\z\o\h\w\a\3\4\k\j\q\r\p\p\l\u\9\8\z\w\x\7\q\o\t\i\e\x\1\g\p\o\h\t\l\0\r\n\q\w\0\v\m\b\s\9\k\6\3\1\o\f\m\4\5\k\e\p\7\s\l\e\9\v\9\l\s\z\i\b\l\p\9\k\7\p\5\1\2\d\9\p\0\g\j\x\k\r\7\i\y\8\6\4\5\e\j\y\b\y\g\n\n\t\a\k\u\9\4\y\o\h\8\4\3\r\j\n\l\3\z\a\i\3\t\r\1\0\4\m\j\s\x\8\2\4\5\k\o\4\8\x\7\o\5\f\7\8\7\v\h\w\6\t\5\x\j\h\o\g\b\9\w\a\q\v\y\6\u\n\7\g\5\z\1\f\m\c\5\4\q\m\j\i\5\0\r\5\m\e\2\l\v\j\x\5\p\p\g\0\s\e\u\g\v\d\l\b\9\o\g\9\k\2\u\i\8\b\c\g\a\q\g\v\b\y\x\0\f\g\f\4\7\p\4\y\p\g\d\v\2\i\5\g\y\n\8\i\f\v\q\1\f\s\m\e\v\v\l\g\o\e\b\r\7\p\f\p\e\1\0\f\1\b\g\h\t\p\a\g\l\g\b\y\c\x\j\9\k\l\t\e\b\p\t\r\8\h\x\3\3\3\g\a\k\9\d\e\e\0\x\o\9\v\e\t\i\y\d\q\1\2\6\q\r\4\h\y\c\i\m\7\g\f\d\0\p\j\0\2\q\z\y\8\j\y\i\f\2\3\n\d\5\i\f\c\s\j\z\f\7\w\t\8\c\0\d\4\t\8\m\p\5\5\c\p ]] 00:07:47.288 00:07:47.288 real 0m1.701s 00:07:47.288 user 0m0.944s 00:07:47.288 sys 0m0.427s 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 
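The copy just launched goes through dd.dump0.link without any nofollow flag, which is the positive half of the test: following the symlink must succeed, while the earlier runs with --iflag=nofollow / --oflag=nofollow had to fail with ELOOP. The ln -fs fixture and the two outcomes, sketched with coreutils dd:

  ln -fs dd.dump0 dd.dump0.link
  dd if=dd.dump0.link of=dd.dump1                    # symlink is followed: copy succeeds
  dd if=dd.dump0.link of=dd.dump1 iflag=nofollow     # O_NOFOLLOW on a symlink: fails with ELOOP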
00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:47.288 ************************************ 00:07:47.288 START TEST dd_flag_noatime_forced_aio 00:07:47.288 ************************************ 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721857672 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721857672 00:07:47.288 21:47:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:48.220 21:47:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.220 [2024-07-24 21:47:53.923046] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:48.220 [2024-07-24 21:47:53.923142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75445 ] 00:07:48.478 [2024-07-24 21:47:54.050706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.478 [2024-07-24 21:47:54.144462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.735 [2024-07-24 21:47:54.196718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.735  Copying: 512/512 [B] (average 500 kBps) 00:07:48.735 00:07:48.735 21:47:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.735 21:47:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721857672 )) 00:07:48.735 21:47:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.735 21:47:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721857672 )) 00:07:48.735 21:47:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.993 [2024-07-24 21:47:54.509093] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:48.993 [2024-07-24 21:47:54.509240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75457 ] 00:07:48.993 [2024-07-24 21:47:54.652430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.250 [2024-07-24 21:47:54.746876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.250 [2024-07-24 21:47:54.799063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.508  Copying: 512/512 [B] (average 500 kBps) 00:07:49.508 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721857674 )) 00:07:49.508 00:07:49.508 real 0m2.187s 00:07:49.508 user 0m0.643s 00:07:49.508 sys 0m0.307s 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:49.508 ************************************ 00:07:49.508 END TEST dd_flag_noatime_forced_aio 00:07:49.508 ************************************ 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:49.508 ************************************ 00:07:49.508 START TEST 
dd_flags_misc_forced_aio 00:07:49.508 ************************************ 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:49.508 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:49.508 [2024-07-24 21:47:55.148863] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:49.508 [2024-07-24 21:47:55.148971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75484 ] 00:07:49.765 [2024-07-24 21:47:55.285978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.765 [2024-07-24 21:47:55.386498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.765 [2024-07-24 21:47:55.443726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.023  Copying: 512/512 [B] (average 500 kBps) 00:07:50.023 00:07:50.023 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0bckn4bqkvrihr4stkovoaf3uupadjhtizx4rrseu5ot5rfz2boyfhtz99wqbeibwh3r1wua7ezntazu0gsfm277kivgrudkwti4edmxjgl6jllfguthswrdicgfhmdrjoyj1iywxr5xfmqn5l9ct7fxzpfzzxpdg50eajpy0f8mx506odhz3u3e1dt7c1qc2l2klirmi2jskid00vcxpbw3qqf2etltv3aqi4mo4nlvib12wwn8odae3y7556s9bmwcqqvvy6g0ouxwbmts88lt3ktebmoav0qpo6s3wekmd6q7vev5wlhbxj1nyk1sjvstwsugc4j29n8eczqsrz8g23xsoayy1mvb5z5dzum5k38qq6q31afcniqdo44w8ve1c0kapaz6531o1uaa4hrlc0blyp45cxk4338esf4h0dyq4llgrjfsnce4opixp8l769ywgpupdv0ncxytmoqytr7m5d80zwe7dj3qx16kin1vyyc67hjtdk0aevf1 == 
\0\b\c\k\n\4\b\q\k\v\r\i\h\r\4\s\t\k\o\v\o\a\f\3\u\u\p\a\d\j\h\t\i\z\x\4\r\r\s\e\u\5\o\t\5\r\f\z\2\b\o\y\f\h\t\z\9\9\w\q\b\e\i\b\w\h\3\r\1\w\u\a\7\e\z\n\t\a\z\u\0\g\s\f\m\2\7\7\k\i\v\g\r\u\d\k\w\t\i\4\e\d\m\x\j\g\l\6\j\l\l\f\g\u\t\h\s\w\r\d\i\c\g\f\h\m\d\r\j\o\y\j\1\i\y\w\x\r\5\x\f\m\q\n\5\l\9\c\t\7\f\x\z\p\f\z\z\x\p\d\g\5\0\e\a\j\p\y\0\f\8\m\x\5\0\6\o\d\h\z\3\u\3\e\1\d\t\7\c\1\q\c\2\l\2\k\l\i\r\m\i\2\j\s\k\i\d\0\0\v\c\x\p\b\w\3\q\q\f\2\e\t\l\t\v\3\a\q\i\4\m\o\4\n\l\v\i\b\1\2\w\w\n\8\o\d\a\e\3\y\7\5\5\6\s\9\b\m\w\c\q\q\v\v\y\6\g\0\o\u\x\w\b\m\t\s\8\8\l\t\3\k\t\e\b\m\o\a\v\0\q\p\o\6\s\3\w\e\k\m\d\6\q\7\v\e\v\5\w\l\h\b\x\j\1\n\y\k\1\s\j\v\s\t\w\s\u\g\c\4\j\2\9\n\8\e\c\z\q\s\r\z\8\g\2\3\x\s\o\a\y\y\1\m\v\b\5\z\5\d\z\u\m\5\k\3\8\q\q\6\q\3\1\a\f\c\n\i\q\d\o\4\4\w\8\v\e\1\c\0\k\a\p\a\z\6\5\3\1\o\1\u\a\a\4\h\r\l\c\0\b\l\y\p\4\5\c\x\k\4\3\3\8\e\s\f\4\h\0\d\y\q\4\l\l\g\r\j\f\s\n\c\e\4\o\p\i\x\p\8\l\7\6\9\y\w\g\p\u\p\d\v\0\n\c\x\y\t\m\o\q\y\t\r\7\m\5\d\8\0\z\w\e\7\d\j\3\q\x\1\6\k\i\n\1\v\y\y\c\6\7\h\j\t\d\k\0\a\e\v\f\1 ]] 00:07:50.023 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:50.023 21:47:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:50.282 [2024-07-24 21:47:55.770029] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:50.282 [2024-07-24 21:47:55.770180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75491 ] 00:07:50.282 [2024-07-24 21:47:55.918468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.540 [2024-07-24 21:47:56.014242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.540 [2024-07-24 21:47:56.066822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.798  Copying: 512/512 [B] (average 500 kBps) 00:07:50.798 00:07:50.798 21:47:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0bckn4bqkvrihr4stkovoaf3uupadjhtizx4rrseu5ot5rfz2boyfhtz99wqbeibwh3r1wua7ezntazu0gsfm277kivgrudkwti4edmxjgl6jllfguthswrdicgfhmdrjoyj1iywxr5xfmqn5l9ct7fxzpfzzxpdg50eajpy0f8mx506odhz3u3e1dt7c1qc2l2klirmi2jskid00vcxpbw3qqf2etltv3aqi4mo4nlvib12wwn8odae3y7556s9bmwcqqvvy6g0ouxwbmts88lt3ktebmoav0qpo6s3wekmd6q7vev5wlhbxj1nyk1sjvstwsugc4j29n8eczqsrz8g23xsoayy1mvb5z5dzum5k38qq6q31afcniqdo44w8ve1c0kapaz6531o1uaa4hrlc0blyp45cxk4338esf4h0dyq4llgrjfsnce4opixp8l769ywgpupdv0ncxytmoqytr7m5d80zwe7dj3qx16kin1vyyc67hjtdk0aevf1 == 
\0\b\c\k\n\4\b\q\k\v\r\i\h\r\4\s\t\k\o\v\o\a\f\3\u\u\p\a\d\j\h\t\i\z\x\4\r\r\s\e\u\5\o\t\5\r\f\z\2\b\o\y\f\h\t\z\9\9\w\q\b\e\i\b\w\h\3\r\1\w\u\a\7\e\z\n\t\a\z\u\0\g\s\f\m\2\7\7\k\i\v\g\r\u\d\k\w\t\i\4\e\d\m\x\j\g\l\6\j\l\l\f\g\u\t\h\s\w\r\d\i\c\g\f\h\m\d\r\j\o\y\j\1\i\y\w\x\r\5\x\f\m\q\n\5\l\9\c\t\7\f\x\z\p\f\z\z\x\p\d\g\5\0\e\a\j\p\y\0\f\8\m\x\5\0\6\o\d\h\z\3\u\3\e\1\d\t\7\c\1\q\c\2\l\2\k\l\i\r\m\i\2\j\s\k\i\d\0\0\v\c\x\p\b\w\3\q\q\f\2\e\t\l\t\v\3\a\q\i\4\m\o\4\n\l\v\i\b\1\2\w\w\n\8\o\d\a\e\3\y\7\5\5\6\s\9\b\m\w\c\q\q\v\v\y\6\g\0\o\u\x\w\b\m\t\s\8\8\l\t\3\k\t\e\b\m\o\a\v\0\q\p\o\6\s\3\w\e\k\m\d\6\q\7\v\e\v\5\w\l\h\b\x\j\1\n\y\k\1\s\j\v\s\t\w\s\u\g\c\4\j\2\9\n\8\e\c\z\q\s\r\z\8\g\2\3\x\s\o\a\y\y\1\m\v\b\5\z\5\d\z\u\m\5\k\3\8\q\q\6\q\3\1\a\f\c\n\i\q\d\o\4\4\w\8\v\e\1\c\0\k\a\p\a\z\6\5\3\1\o\1\u\a\a\4\h\r\l\c\0\b\l\y\p\4\5\c\x\k\4\3\3\8\e\s\f\4\h\0\d\y\q\4\l\l\g\r\j\f\s\n\c\e\4\o\p\i\x\p\8\l\7\6\9\y\w\g\p\u\p\d\v\0\n\c\x\y\t\m\o\q\y\t\r\7\m\5\d\8\0\z\w\e\7\d\j\3\q\x\1\6\k\i\n\1\v\y\y\c\6\7\h\j\t\d\k\0\a\e\v\f\1 ]] 00:07:50.798 21:47:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:50.798 21:47:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:50.798 [2024-07-24 21:47:56.363972] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:50.798 [2024-07-24 21:47:56.364076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75499 ] 00:07:50.798 [2024-07-24 21:47:56.502223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.056 [2024-07-24 21:47:56.597967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.056 [2024-07-24 21:47:56.650485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.314  Copying: 512/512 [B] (average 166 kBps) 00:07:51.314 00:07:51.315 21:47:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0bckn4bqkvrihr4stkovoaf3uupadjhtizx4rrseu5ot5rfz2boyfhtz99wqbeibwh3r1wua7ezntazu0gsfm277kivgrudkwti4edmxjgl6jllfguthswrdicgfhmdrjoyj1iywxr5xfmqn5l9ct7fxzpfzzxpdg50eajpy0f8mx506odhz3u3e1dt7c1qc2l2klirmi2jskid00vcxpbw3qqf2etltv3aqi4mo4nlvib12wwn8odae3y7556s9bmwcqqvvy6g0ouxwbmts88lt3ktebmoav0qpo6s3wekmd6q7vev5wlhbxj1nyk1sjvstwsugc4j29n8eczqsrz8g23xsoayy1mvb5z5dzum5k38qq6q31afcniqdo44w8ve1c0kapaz6531o1uaa4hrlc0blyp45cxk4338esf4h0dyq4llgrjfsnce4opixp8l769ywgpupdv0ncxytmoqytr7m5d80zwe7dj3qx16kin1vyyc67hjtdk0aevf1 == 
\0\b\c\k\n\4\b\q\k\v\r\i\h\r\4\s\t\k\o\v\o\a\f\3\u\u\p\a\d\j\h\t\i\z\x\4\r\r\s\e\u\5\o\t\5\r\f\z\2\b\o\y\f\h\t\z\9\9\w\q\b\e\i\b\w\h\3\r\1\w\u\a\7\e\z\n\t\a\z\u\0\g\s\f\m\2\7\7\k\i\v\g\r\u\d\k\w\t\i\4\e\d\m\x\j\g\l\6\j\l\l\f\g\u\t\h\s\w\r\d\i\c\g\f\h\m\d\r\j\o\y\j\1\i\y\w\x\r\5\x\f\m\q\n\5\l\9\c\t\7\f\x\z\p\f\z\z\x\p\d\g\5\0\e\a\j\p\y\0\f\8\m\x\5\0\6\o\d\h\z\3\u\3\e\1\d\t\7\c\1\q\c\2\l\2\k\l\i\r\m\i\2\j\s\k\i\d\0\0\v\c\x\p\b\w\3\q\q\f\2\e\t\l\t\v\3\a\q\i\4\m\o\4\n\l\v\i\b\1\2\w\w\n\8\o\d\a\e\3\y\7\5\5\6\s\9\b\m\w\c\q\q\v\v\y\6\g\0\o\u\x\w\b\m\t\s\8\8\l\t\3\k\t\e\b\m\o\a\v\0\q\p\o\6\s\3\w\e\k\m\d\6\q\7\v\e\v\5\w\l\h\b\x\j\1\n\y\k\1\s\j\v\s\t\w\s\u\g\c\4\j\2\9\n\8\e\c\z\q\s\r\z\8\g\2\3\x\s\o\a\y\y\1\m\v\b\5\z\5\d\z\u\m\5\k\3\8\q\q\6\q\3\1\a\f\c\n\i\q\d\o\4\4\w\8\v\e\1\c\0\k\a\p\a\z\6\5\3\1\o\1\u\a\a\4\h\r\l\c\0\b\l\y\p\4\5\c\x\k\4\3\3\8\e\s\f\4\h\0\d\y\q\4\l\l\g\r\j\f\s\n\c\e\4\o\p\i\x\p\8\l\7\6\9\y\w\g\p\u\p\d\v\0\n\c\x\y\t\m\o\q\y\t\r\7\m\5\d\8\0\z\w\e\7\d\j\3\q\x\1\6\k\i\n\1\v\y\y\c\6\7\h\j\t\d\k\0\a\e\v\f\1 ]] 00:07:51.315 21:47:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.315 21:47:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:51.315 [2024-07-24 21:47:56.960418] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:51.315 [2024-07-24 21:47:56.960544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75512 ] 00:07:51.574 [2024-07-24 21:47:57.106748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.574 [2024-07-24 21:47:57.201606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.574 [2024-07-24 21:47:57.254140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.832  Copying: 512/512 [B] (average 500 kBps) 00:07:51.832 00:07:51.832 21:47:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0bckn4bqkvrihr4stkovoaf3uupadjhtizx4rrseu5ot5rfz2boyfhtz99wqbeibwh3r1wua7ezntazu0gsfm277kivgrudkwti4edmxjgl6jllfguthswrdicgfhmdrjoyj1iywxr5xfmqn5l9ct7fxzpfzzxpdg50eajpy0f8mx506odhz3u3e1dt7c1qc2l2klirmi2jskid00vcxpbw3qqf2etltv3aqi4mo4nlvib12wwn8odae3y7556s9bmwcqqvvy6g0ouxwbmts88lt3ktebmoav0qpo6s3wekmd6q7vev5wlhbxj1nyk1sjvstwsugc4j29n8eczqsrz8g23xsoayy1mvb5z5dzum5k38qq6q31afcniqdo44w8ve1c0kapaz6531o1uaa4hrlc0blyp45cxk4338esf4h0dyq4llgrjfsnce4opixp8l769ywgpupdv0ncxytmoqytr7m5d80zwe7dj3qx16kin1vyyc67hjtdk0aevf1 == 
\0\b\c\k\n\4\b\q\k\v\r\i\h\r\4\s\t\k\o\v\o\a\f\3\u\u\p\a\d\j\h\t\i\z\x\4\r\r\s\e\u\5\o\t\5\r\f\z\2\b\o\y\f\h\t\z\9\9\w\q\b\e\i\b\w\h\3\r\1\w\u\a\7\e\z\n\t\a\z\u\0\g\s\f\m\2\7\7\k\i\v\g\r\u\d\k\w\t\i\4\e\d\m\x\j\g\l\6\j\l\l\f\g\u\t\h\s\w\r\d\i\c\g\f\h\m\d\r\j\o\y\j\1\i\y\w\x\r\5\x\f\m\q\n\5\l\9\c\t\7\f\x\z\p\f\z\z\x\p\d\g\5\0\e\a\j\p\y\0\f\8\m\x\5\0\6\o\d\h\z\3\u\3\e\1\d\t\7\c\1\q\c\2\l\2\k\l\i\r\m\i\2\j\s\k\i\d\0\0\v\c\x\p\b\w\3\q\q\f\2\e\t\l\t\v\3\a\q\i\4\m\o\4\n\l\v\i\b\1\2\w\w\n\8\o\d\a\e\3\y\7\5\5\6\s\9\b\m\w\c\q\q\v\v\y\6\g\0\o\u\x\w\b\m\t\s\8\8\l\t\3\k\t\e\b\m\o\a\v\0\q\p\o\6\s\3\w\e\k\m\d\6\q\7\v\e\v\5\w\l\h\b\x\j\1\n\y\k\1\s\j\v\s\t\w\s\u\g\c\4\j\2\9\n\8\e\c\z\q\s\r\z\8\g\2\3\x\s\o\a\y\y\1\m\v\b\5\z\5\d\z\u\m\5\k\3\8\q\q\6\q\3\1\a\f\c\n\i\q\d\o\4\4\w\8\v\e\1\c\0\k\a\p\a\z\6\5\3\1\o\1\u\a\a\4\h\r\l\c\0\b\l\y\p\4\5\c\x\k\4\3\3\8\e\s\f\4\h\0\d\y\q\4\l\l\g\r\j\f\s\n\c\e\4\o\p\i\x\p\8\l\7\6\9\y\w\g\p\u\p\d\v\0\n\c\x\y\t\m\o\q\y\t\r\7\m\5\d\8\0\z\w\e\7\d\j\3\q\x\1\6\k\i\n\1\v\y\y\c\6\7\h\j\t\d\k\0\a\e\v\f\1 ]] 00:07:51.832 21:47:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:51.832 21:47:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:51.832 21:47:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:51.832 21:47:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:51.832 21:47:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.832 21:47:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:52.091 [2024-07-24 21:47:57.552840] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:52.091 [2024-07-24 21:47:57.552976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75514 ] 00:07:52.091 [2024-07-24 21:47:57.693761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.091 [2024-07-24 21:47:57.790136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.349 [2024-07-24 21:47:57.843901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.607  Copying: 512/512 [B] (average 500 kBps) 00:07:52.607 00:07:52.607 21:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ta1l90vbv9pejcu99vhfdxvkk4rh3lmueznb0e9sh0015iwgs2jczuskxnzbe1d0wvhclznv1cxdfd83vj0dy4293yohp1abadey3lb2ylnc3p3tpc9dyi5dfpv6d2k8k587sbvoiy2hgmw6fgpl4vichb75uhs135895oc31wxzrd4i5kozmqa1zlzsc3mbutkilgqgqyuw4hnk7azfbj8mpxppvsnp1q3t196n79td757d988je78ct28pyzt32xudcb27mag22t4w4xvc87ie8bhajb8wcgvc8naalp3l1oooty4wxg1ugwtezmzpzwtihe5r8aqxyjvax3vb751fjzc3ddcpylvsvucundjreulb20crqsyqypelkh13bvt1l7sd4ziwfbnb73ho0t4hgxenzt40c7nkjpx72c0xc2fjmh9duh9ad4e2lrvjw5ah8uazmeyh5yp785p255ajyz36ahob6pw9vwra1ckw7ov2bimjwebotwiunz06 == \t\a\1\l\9\0\v\b\v\9\p\e\j\c\u\9\9\v\h\f\d\x\v\k\k\4\r\h\3\l\m\u\e\z\n\b\0\e\9\s\h\0\0\1\5\i\w\g\s\2\j\c\z\u\s\k\x\n\z\b\e\1\d\0\w\v\h\c\l\z\n\v\1\c\x\d\f\d\8\3\v\j\0\d\y\4\2\9\3\y\o\h\p\1\a\b\a\d\e\y\3\l\b\2\y\l\n\c\3\p\3\t\p\c\9\d\y\i\5\d\f\p\v\6\d\2\k\8\k\5\8\7\s\b\v\o\i\y\2\h\g\m\w\6\f\g\p\l\4\v\i\c\h\b\7\5\u\h\s\1\3\5\8\9\5\o\c\3\1\w\x\z\r\d\4\i\5\k\o\z\m\q\a\1\z\l\z\s\c\3\m\b\u\t\k\i\l\g\q\g\q\y\u\w\4\h\n\k\7\a\z\f\b\j\8\m\p\x\p\p\v\s\n\p\1\q\3\t\1\9\6\n\7\9\t\d\7\5\7\d\9\8\8\j\e\7\8\c\t\2\8\p\y\z\t\3\2\x\u\d\c\b\2\7\m\a\g\2\2\t\4\w\4\x\v\c\8\7\i\e\8\b\h\a\j\b\8\w\c\g\v\c\8\n\a\a\l\p\3\l\1\o\o\o\t\y\4\w\x\g\1\u\g\w\t\e\z\m\z\p\z\w\t\i\h\e\5\r\8\a\q\x\y\j\v\a\x\3\v\b\7\5\1\f\j\z\c\3\d\d\c\p\y\l\v\s\v\u\c\u\n\d\j\r\e\u\l\b\2\0\c\r\q\s\y\q\y\p\e\l\k\h\1\3\b\v\t\1\l\7\s\d\4\z\i\w\f\b\n\b\7\3\h\o\0\t\4\h\g\x\e\n\z\t\4\0\c\7\n\k\j\p\x\7\2\c\0\x\c\2\f\j\m\h\9\d\u\h\9\a\d\4\e\2\l\r\v\j\w\5\a\h\8\u\a\z\m\e\y\h\5\y\p\7\8\5\p\2\5\5\a\j\y\z\3\6\a\h\o\b\6\p\w\9\v\w\r\a\1\c\k\w\7\o\v\2\b\i\m\j\w\e\b\o\t\w\i\u\n\z\0\6 ]] 00:07:52.607 21:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.607 21:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:52.607 [2024-07-24 21:47:58.129340] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:52.607 [2024-07-24 21:47:58.129433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75527 ] 00:07:52.607 [2024-07-24 21:47:58.266802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.865 [2024-07-24 21:47:58.358684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.865 [2024-07-24 21:47:58.413150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.124  Copying: 512/512 [B] (average 500 kBps) 00:07:53.124 00:07:53.124 21:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ta1l90vbv9pejcu99vhfdxvkk4rh3lmueznb0e9sh0015iwgs2jczuskxnzbe1d0wvhclznv1cxdfd83vj0dy4293yohp1abadey3lb2ylnc3p3tpc9dyi5dfpv6d2k8k587sbvoiy2hgmw6fgpl4vichb75uhs135895oc31wxzrd4i5kozmqa1zlzsc3mbutkilgqgqyuw4hnk7azfbj8mpxppvsnp1q3t196n79td757d988je78ct28pyzt32xudcb27mag22t4w4xvc87ie8bhajb8wcgvc8naalp3l1oooty4wxg1ugwtezmzpzwtihe5r8aqxyjvax3vb751fjzc3ddcpylvsvucundjreulb20crqsyqypelkh13bvt1l7sd4ziwfbnb73ho0t4hgxenzt40c7nkjpx72c0xc2fjmh9duh9ad4e2lrvjw5ah8uazmeyh5yp785p255ajyz36ahob6pw9vwra1ckw7ov2bimjwebotwiunz06 == \t\a\1\l\9\0\v\b\v\9\p\e\j\c\u\9\9\v\h\f\d\x\v\k\k\4\r\h\3\l\m\u\e\z\n\b\0\e\9\s\h\0\0\1\5\i\w\g\s\2\j\c\z\u\s\k\x\n\z\b\e\1\d\0\w\v\h\c\l\z\n\v\1\c\x\d\f\d\8\3\v\j\0\d\y\4\2\9\3\y\o\h\p\1\a\b\a\d\e\y\3\l\b\2\y\l\n\c\3\p\3\t\p\c\9\d\y\i\5\d\f\p\v\6\d\2\k\8\k\5\8\7\s\b\v\o\i\y\2\h\g\m\w\6\f\g\p\l\4\v\i\c\h\b\7\5\u\h\s\1\3\5\8\9\5\o\c\3\1\w\x\z\r\d\4\i\5\k\o\z\m\q\a\1\z\l\z\s\c\3\m\b\u\t\k\i\l\g\q\g\q\y\u\w\4\h\n\k\7\a\z\f\b\j\8\m\p\x\p\p\v\s\n\p\1\q\3\t\1\9\6\n\7\9\t\d\7\5\7\d\9\8\8\j\e\7\8\c\t\2\8\p\y\z\t\3\2\x\u\d\c\b\2\7\m\a\g\2\2\t\4\w\4\x\v\c\8\7\i\e\8\b\h\a\j\b\8\w\c\g\v\c\8\n\a\a\l\p\3\l\1\o\o\o\t\y\4\w\x\g\1\u\g\w\t\e\z\m\z\p\z\w\t\i\h\e\5\r\8\a\q\x\y\j\v\a\x\3\v\b\7\5\1\f\j\z\c\3\d\d\c\p\y\l\v\s\v\u\c\u\n\d\j\r\e\u\l\b\2\0\c\r\q\s\y\q\y\p\e\l\k\h\1\3\b\v\t\1\l\7\s\d\4\z\i\w\f\b\n\b\7\3\h\o\0\t\4\h\g\x\e\n\z\t\4\0\c\7\n\k\j\p\x\7\2\c\0\x\c\2\f\j\m\h\9\d\u\h\9\a\d\4\e\2\l\r\v\j\w\5\a\h\8\u\a\z\m\e\y\h\5\y\p\7\8\5\p\2\5\5\a\j\y\z\3\6\a\h\o\b\6\p\w\9\v\w\r\a\1\c\k\w\7\o\v\2\b\i\m\j\w\e\b\o\t\w\i\u\n\z\0\6 ]] 00:07:53.124 21:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.124 21:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.124 [2024-07-24 21:47:58.692494] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:53.124 [2024-07-24 21:47:58.692583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75533 ] 00:07:53.124 [2024-07-24 21:47:58.824565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.382 [2024-07-24 21:47:58.914162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.382 [2024-07-24 21:47:58.968103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.640  Copying: 512/512 [B] (average 500 kBps) 00:07:53.640 00:07:53.640 21:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ta1l90vbv9pejcu99vhfdxvkk4rh3lmueznb0e9sh0015iwgs2jczuskxnzbe1d0wvhclznv1cxdfd83vj0dy4293yohp1abadey3lb2ylnc3p3tpc9dyi5dfpv6d2k8k587sbvoiy2hgmw6fgpl4vichb75uhs135895oc31wxzrd4i5kozmqa1zlzsc3mbutkilgqgqyuw4hnk7azfbj8mpxppvsnp1q3t196n79td757d988je78ct28pyzt32xudcb27mag22t4w4xvc87ie8bhajb8wcgvc8naalp3l1oooty4wxg1ugwtezmzpzwtihe5r8aqxyjvax3vb751fjzc3ddcpylvsvucundjreulb20crqsyqypelkh13bvt1l7sd4ziwfbnb73ho0t4hgxenzt40c7nkjpx72c0xc2fjmh9duh9ad4e2lrvjw5ah8uazmeyh5yp785p255ajyz36ahob6pw9vwra1ckw7ov2bimjwebotwiunz06 == \t\a\1\l\9\0\v\b\v\9\p\e\j\c\u\9\9\v\h\f\d\x\v\k\k\4\r\h\3\l\m\u\e\z\n\b\0\e\9\s\h\0\0\1\5\i\w\g\s\2\j\c\z\u\s\k\x\n\z\b\e\1\d\0\w\v\h\c\l\z\n\v\1\c\x\d\f\d\8\3\v\j\0\d\y\4\2\9\3\y\o\h\p\1\a\b\a\d\e\y\3\l\b\2\y\l\n\c\3\p\3\t\p\c\9\d\y\i\5\d\f\p\v\6\d\2\k\8\k\5\8\7\s\b\v\o\i\y\2\h\g\m\w\6\f\g\p\l\4\v\i\c\h\b\7\5\u\h\s\1\3\5\8\9\5\o\c\3\1\w\x\z\r\d\4\i\5\k\o\z\m\q\a\1\z\l\z\s\c\3\m\b\u\t\k\i\l\g\q\g\q\y\u\w\4\h\n\k\7\a\z\f\b\j\8\m\p\x\p\p\v\s\n\p\1\q\3\t\1\9\6\n\7\9\t\d\7\5\7\d\9\8\8\j\e\7\8\c\t\2\8\p\y\z\t\3\2\x\u\d\c\b\2\7\m\a\g\2\2\t\4\w\4\x\v\c\8\7\i\e\8\b\h\a\j\b\8\w\c\g\v\c\8\n\a\a\l\p\3\l\1\o\o\o\t\y\4\w\x\g\1\u\g\w\t\e\z\m\z\p\z\w\t\i\h\e\5\r\8\a\q\x\y\j\v\a\x\3\v\b\7\5\1\f\j\z\c\3\d\d\c\p\y\l\v\s\v\u\c\u\n\d\j\r\e\u\l\b\2\0\c\r\q\s\y\q\y\p\e\l\k\h\1\3\b\v\t\1\l\7\s\d\4\z\i\w\f\b\n\b\7\3\h\o\0\t\4\h\g\x\e\n\z\t\4\0\c\7\n\k\j\p\x\7\2\c\0\x\c\2\f\j\m\h\9\d\u\h\9\a\d\4\e\2\l\r\v\j\w\5\a\h\8\u\a\z\m\e\y\h\5\y\p\7\8\5\p\2\5\5\a\j\y\z\3\6\a\h\o\b\6\p\w\9\v\w\r\a\1\c\k\w\7\o\v\2\b\i\m\j\w\e\b\o\t\w\i\u\n\z\0\6 ]] 00:07:53.640 21:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.640 21:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:53.640 [2024-07-24 21:47:59.258291] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:53.641 [2024-07-24 21:47:59.258386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75542 ] 00:07:53.898 [2024-07-24 21:47:59.393860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.898 [2024-07-24 21:47:59.484193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.899 [2024-07-24 21:47:59.537085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.157  Copying: 512/512 [B] (average 500 kBps) 00:07:54.157 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ta1l90vbv9pejcu99vhfdxvkk4rh3lmueznb0e9sh0015iwgs2jczuskxnzbe1d0wvhclznv1cxdfd83vj0dy4293yohp1abadey3lb2ylnc3p3tpc9dyi5dfpv6d2k8k587sbvoiy2hgmw6fgpl4vichb75uhs135895oc31wxzrd4i5kozmqa1zlzsc3mbutkilgqgqyuw4hnk7azfbj8mpxppvsnp1q3t196n79td757d988je78ct28pyzt32xudcb27mag22t4w4xvc87ie8bhajb8wcgvc8naalp3l1oooty4wxg1ugwtezmzpzwtihe5r8aqxyjvax3vb751fjzc3ddcpylvsvucundjreulb20crqsyqypelkh13bvt1l7sd4ziwfbnb73ho0t4hgxenzt40c7nkjpx72c0xc2fjmh9duh9ad4e2lrvjw5ah8uazmeyh5yp785p255ajyz36ahob6pw9vwra1ckw7ov2bimjwebotwiunz06 == \t\a\1\l\9\0\v\b\v\9\p\e\j\c\u\9\9\v\h\f\d\x\v\k\k\4\r\h\3\l\m\u\e\z\n\b\0\e\9\s\h\0\0\1\5\i\w\g\s\2\j\c\z\u\s\k\x\n\z\b\e\1\d\0\w\v\h\c\l\z\n\v\1\c\x\d\f\d\8\3\v\j\0\d\y\4\2\9\3\y\o\h\p\1\a\b\a\d\e\y\3\l\b\2\y\l\n\c\3\p\3\t\p\c\9\d\y\i\5\d\f\p\v\6\d\2\k\8\k\5\8\7\s\b\v\o\i\y\2\h\g\m\w\6\f\g\p\l\4\v\i\c\h\b\7\5\u\h\s\1\3\5\8\9\5\o\c\3\1\w\x\z\r\d\4\i\5\k\o\z\m\q\a\1\z\l\z\s\c\3\m\b\u\t\k\i\l\g\q\g\q\y\u\w\4\h\n\k\7\a\z\f\b\j\8\m\p\x\p\p\v\s\n\p\1\q\3\t\1\9\6\n\7\9\t\d\7\5\7\d\9\8\8\j\e\7\8\c\t\2\8\p\y\z\t\3\2\x\u\d\c\b\2\7\m\a\g\2\2\t\4\w\4\x\v\c\8\7\i\e\8\b\h\a\j\b\8\w\c\g\v\c\8\n\a\a\l\p\3\l\1\o\o\o\t\y\4\w\x\g\1\u\g\w\t\e\z\m\z\p\z\w\t\i\h\e\5\r\8\a\q\x\y\j\v\a\x\3\v\b\7\5\1\f\j\z\c\3\d\d\c\p\y\l\v\s\v\u\c\u\n\d\j\r\e\u\l\b\2\0\c\r\q\s\y\q\y\p\e\l\k\h\1\3\b\v\t\1\l\7\s\d\4\z\i\w\f\b\n\b\7\3\h\o\0\t\4\h\g\x\e\n\z\t\4\0\c\7\n\k\j\p\x\7\2\c\0\x\c\2\f\j\m\h\9\d\u\h\9\a\d\4\e\2\l\r\v\j\w\5\a\h\8\u\a\z\m\e\y\h\5\y\p\7\8\5\p\2\5\5\a\j\y\z\3\6\a\h\o\b\6\p\w\9\v\w\r\a\1\c\k\w\7\o\v\2\b\i\m\j\w\e\b\o\t\w\i\u\n\z\0\6 ]] 00:07:54.157 00:07:54.157 real 0m4.689s 00:07:54.157 user 0m2.536s 00:07:54.157 sys 0m1.187s 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.157 ************************************ 00:07:54.157 END TEST dd_flags_misc_forced_aio 00:07:54.157 ************************************ 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:54.157 00:07:54.157 real 0m21.022s 00:07:54.157 user 0m10.342s 00:07:54.157 sys 0m6.546s 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.157 21:47:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.157 ************************************ 
00:07:54.157 END TEST spdk_dd_posix 00:07:54.157 ************************************ 00:07:54.157 21:47:59 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:54.157 21:47:59 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:54.157 21:47:59 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.157 21:47:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.157 ************************************ 00:07:54.157 START TEST spdk_dd_malloc 00:07:54.157 ************************************ 00:07:54.157 21:47:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:54.416 * Looking for test storage... 00:07:54.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:54.416 ************************************ 00:07:54.416 START TEST dd_malloc_copy 00:07:54.416 ************************************ 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:54.416 21:47:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:54.416 [2024-07-24 21:48:00.008191] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:54.416 [2024-07-24 21:48:00.009283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75616 ] 00:07:54.416 { 00:07:54.416 "subsystems": [ 00:07:54.416 { 00:07:54.416 "subsystem": "bdev", 00:07:54.416 "config": [ 00:07:54.416 { 00:07:54.416 "params": { 00:07:54.416 "block_size": 512, 00:07:54.416 "num_blocks": 1048576, 00:07:54.416 "name": "malloc0" 00:07:54.416 }, 00:07:54.416 "method": "bdev_malloc_create" 00:07:54.416 }, 00:07:54.416 { 00:07:54.416 "params": { 00:07:54.416 "block_size": 512, 00:07:54.416 "num_blocks": 1048576, 00:07:54.416 "name": "malloc1" 00:07:54.416 }, 00:07:54.416 "method": "bdev_malloc_create" 00:07:54.416 }, 00:07:54.416 { 00:07:54.416 "method": "bdev_wait_for_examine" 00:07:54.416 } 00:07:54.416 ] 00:07:54.416 } 00:07:54.416 ] 00:07:54.416 } 00:07:54.675 [2024-07-24 21:48:00.158880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.675 [2024-07-24 21:48:00.242124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.675 [2024-07-24 21:48:00.295015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.122  Copying: 202/512 [MB] (202 MBps) Copying: 405/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 202 MBps) 00:07:58.122 00:07:58.122 21:48:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:58.122 21:48:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:58.122 21:48:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:58.122 21:48:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.122 [2024-07-24 21:48:03.766828] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:58.122 [2024-07-24 21:48:03.766937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75658 ] 00:07:58.122 { 00:07:58.122 "subsystems": [ 00:07:58.122 { 00:07:58.122 "subsystem": "bdev", 00:07:58.122 "config": [ 00:07:58.122 { 00:07:58.122 "params": { 00:07:58.122 "block_size": 512, 00:07:58.122 "num_blocks": 1048576, 00:07:58.122 "name": "malloc0" 00:07:58.122 }, 00:07:58.122 "method": "bdev_malloc_create" 00:07:58.122 }, 00:07:58.122 { 00:07:58.122 "params": { 00:07:58.122 "block_size": 512, 00:07:58.122 "num_blocks": 1048576, 00:07:58.122 "name": "malloc1" 00:07:58.122 }, 00:07:58.122 "method": "bdev_malloc_create" 00:07:58.122 }, 00:07:58.122 { 00:07:58.122 "method": "bdev_wait_for_examine" 00:07:58.122 } 00:07:58.122 ] 00:07:58.122 } 00:07:58.122 ] 00:07:58.122 } 00:07:58.380 [2024-07-24 21:48:03.908601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.380 [2024-07-24 21:48:04.005116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.380 [2024-07-24 21:48:04.062856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:01.954  Copying: 209/512 [MB] (209 MBps) Copying: 412/512 [MB] (203 MBps) Copying: 512/512 [MB] (average 203 MBps) 00:08:01.954 00:08:01.954 00:08:01.954 real 0m7.557s 00:08:01.954 user 0m6.535s 00:08:01.954 sys 0m0.858s 00:08:01.954 21:48:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.954 21:48:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:01.954 ************************************ 00:08:01.954 END TEST dd_malloc_copy 00:08:01.954 ************************************ 00:08:01.954 00:08:01.954 real 0m7.695s 00:08:01.954 user 0m6.578s 00:08:01.954 sys 0m0.955s 00:08:01.954 21:48:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.954 21:48:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:01.954 ************************************ 00:08:01.954 END TEST spdk_dd_malloc 00:08:01.954 ************************************ 00:08:01.954 21:48:07 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:01.954 21:48:07 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:01.954 21:48:07 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.954 21:48:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:01.954 ************************************ 00:08:01.954 START TEST spdk_dd_bdev_to_bdev 00:08:01.954 ************************************ 00:08:01.954 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:02.213 * Looking for test storage... 
00:08:02.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:02.213 
21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:02.213 ************************************ 00:08:02.213 START TEST dd_inflate_file 00:08:02.213 ************************************ 00:08:02.213 21:48:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:02.213 [2024-07-24 21:48:07.757716] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:02.213 [2024-07-24 21:48:07.757823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75768 ] 00:08:02.213 [2024-07-24 21:48:07.893308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.471 [2024-07-24 21:48:07.983228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.471 [2024-07-24 21:48:08.037213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.729  Copying: 64/64 [MB] (average 1560 MBps) 00:08:02.729 00:08:02.729 00:08:02.729 real 0m0.591s 00:08:02.730 user 0m0.342s 00:08:02.730 sys 0m0.304s 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:02.730 ************************************ 00:08:02.730 END TEST dd_inflate_file 00:08:02.730 ************************************ 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:02.730 ************************************ 00:08:02.730 START TEST dd_copy_to_out_bdev 00:08:02.730 ************************************ 00:08:02.730 21:48:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:02.730 [2024-07-24 21:48:08.408756] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:02.730 [2024-07-24 21:48:08.408878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75807 ] 00:08:02.730 { 00:08:02.730 "subsystems": [ 00:08:02.730 { 00:08:02.730 "subsystem": "bdev", 00:08:02.730 "config": [ 00:08:02.730 { 00:08:02.730 "params": { 00:08:02.730 "trtype": "pcie", 00:08:02.730 "traddr": "0000:00:10.0", 00:08:02.730 "name": "Nvme0" 00:08:02.730 }, 00:08:02.730 "method": "bdev_nvme_attach_controller" 00:08:02.730 }, 00:08:02.730 { 00:08:02.730 "params": { 00:08:02.730 "trtype": "pcie", 00:08:02.730 "traddr": "0000:00:11.0", 00:08:02.730 "name": "Nvme1" 00:08:02.730 }, 00:08:02.730 "method": "bdev_nvme_attach_controller" 00:08:02.730 }, 00:08:02.730 { 00:08:02.730 "method": "bdev_wait_for_examine" 00:08:02.730 } 00:08:02.730 ] 00:08:02.730 } 00:08:02.730 ] 00:08:02.730 } 00:08:02.989 [2024-07-24 21:48:08.547783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.989 [2024-07-24 21:48:08.640847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.989 [2024-07-24 21:48:08.696201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.622  Copying: 63/64 [MB] (63 MBps) Copying: 64/64 [MB] (average 63 MBps) 00:08:04.622 00:08:04.623 ************************************ 00:08:04.623 END TEST dd_copy_to_out_bdev 00:08:04.623 ************************************ 00:08:04.623 00:08:04.623 real 0m1.759s 00:08:04.623 user 0m1.534s 00:08:04.623 sys 0m1.356s 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:04.623 ************************************ 00:08:04.623 START TEST dd_offset_magic 00:08:04.623 ************************************ 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- 
dd/common.sh@31 -- # xtrace_disable 00:08:04.623 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:04.623 [2024-07-24 21:48:10.213676] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:04.623 [2024-07-24 21:48:10.213780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75847 ] 00:08:04.623 { 00:08:04.623 "subsystems": [ 00:08:04.623 { 00:08:04.623 "subsystem": "bdev", 00:08:04.623 "config": [ 00:08:04.623 { 00:08:04.623 "params": { 00:08:04.623 "trtype": "pcie", 00:08:04.623 "traddr": "0000:00:10.0", 00:08:04.623 "name": "Nvme0" 00:08:04.623 }, 00:08:04.623 "method": "bdev_nvme_attach_controller" 00:08:04.623 }, 00:08:04.623 { 00:08:04.623 "params": { 00:08:04.623 "trtype": "pcie", 00:08:04.623 "traddr": "0000:00:11.0", 00:08:04.623 "name": "Nvme1" 00:08:04.623 }, 00:08:04.623 "method": "bdev_nvme_attach_controller" 00:08:04.623 }, 00:08:04.623 { 00:08:04.623 "method": "bdev_wait_for_examine" 00:08:04.623 } 00:08:04.623 ] 00:08:04.623 } 00:08:04.623 ] 00:08:04.623 } 00:08:04.881 [2024-07-24 21:48:10.349906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.881 [2024-07-24 21:48:10.436832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.881 [2024-07-24 21:48:10.490831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:05.396  Copying: 65/65 [MB] (average 1031 MBps) 00:08:05.396 00:08:05.396 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:05.396 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:05.396 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:05.396 21:48:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:05.396 [2024-07-24 21:48:11.010877] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:05.396 [2024-07-24 21:48:11.010991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75861 ] 00:08:05.396 { 00:08:05.396 "subsystems": [ 00:08:05.396 { 00:08:05.396 "subsystem": "bdev", 00:08:05.396 "config": [ 00:08:05.396 { 00:08:05.396 "params": { 00:08:05.396 "trtype": "pcie", 00:08:05.396 "traddr": "0000:00:10.0", 00:08:05.396 "name": "Nvme0" 00:08:05.396 }, 00:08:05.396 "method": "bdev_nvme_attach_controller" 00:08:05.396 }, 00:08:05.396 { 00:08:05.396 "params": { 00:08:05.396 "trtype": "pcie", 00:08:05.396 "traddr": "0000:00:11.0", 00:08:05.396 "name": "Nvme1" 00:08:05.396 }, 00:08:05.396 "method": "bdev_nvme_attach_controller" 00:08:05.396 }, 00:08:05.396 { 00:08:05.396 "method": "bdev_wait_for_examine" 00:08:05.396 } 00:08:05.396 ] 00:08:05.396 } 00:08:05.396 ] 00:08:05.396 } 00:08:05.654 [2024-07-24 21:48:11.145698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.654 [2024-07-24 21:48:11.244205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.654 [2024-07-24 21:48:11.303400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.172  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.172 00:08:06.172 21:48:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:06.172 21:48:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:06.172 21:48:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:06.172 21:48:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:06.172 21:48:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:06.172 21:48:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:06.172 21:48:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:06.172 { 00:08:06.172 "subsystems": [ 00:08:06.172 { 00:08:06.172 "subsystem": "bdev", 00:08:06.172 "config": [ 00:08:06.172 { 00:08:06.172 "params": { 00:08:06.172 "trtype": "pcie", 00:08:06.172 "traddr": "0000:00:10.0", 00:08:06.172 "name": "Nvme0" 00:08:06.172 }, 00:08:06.172 "method": "bdev_nvme_attach_controller" 00:08:06.172 }, 00:08:06.172 { 00:08:06.172 "params": { 00:08:06.172 "trtype": "pcie", 00:08:06.172 "traddr": "0000:00:11.0", 00:08:06.172 "name": "Nvme1" 00:08:06.172 }, 00:08:06.172 "method": "bdev_nvme_attach_controller" 00:08:06.172 }, 00:08:06.172 { 00:08:06.172 "method": "bdev_wait_for_examine" 00:08:06.172 } 00:08:06.172 ] 00:08:06.172 } 00:08:06.172 ] 00:08:06.172 } 00:08:06.172 [2024-07-24 21:48:11.738012] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:06.172 [2024-07-24 21:48:11.738137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75883 ] 00:08:06.172 [2024-07-24 21:48:11.879890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.430 [2024-07-24 21:48:11.964124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.430 [2024-07-24 21:48:12.018540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.947  Copying: 65/65 [MB] (average 970 MBps) 00:08:06.947 00:08:06.947 21:48:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:06.947 21:48:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:06.947 21:48:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:06.947 21:48:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:06.947 [2024-07-24 21:48:12.558882] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:06.947 [2024-07-24 21:48:12.558985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75903 ] 00:08:06.947 { 00:08:06.947 "subsystems": [ 00:08:06.947 { 00:08:06.947 "subsystem": "bdev", 00:08:06.947 "config": [ 00:08:06.947 { 00:08:06.947 "params": { 00:08:06.947 "trtype": "pcie", 00:08:06.947 "traddr": "0000:00:10.0", 00:08:06.947 "name": "Nvme0" 00:08:06.947 }, 00:08:06.947 "method": "bdev_nvme_attach_controller" 00:08:06.947 }, 00:08:06.947 { 00:08:06.947 "params": { 00:08:06.947 "trtype": "pcie", 00:08:06.947 "traddr": "0000:00:11.0", 00:08:06.947 "name": "Nvme1" 00:08:06.947 }, 00:08:06.947 "method": "bdev_nvme_attach_controller" 00:08:06.947 }, 00:08:06.947 { 00:08:06.947 "method": "bdev_wait_for_examine" 00:08:06.947 } 00:08:06.947 ] 00:08:06.947 } 00:08:06.947 ] 00:08:06.947 } 00:08:07.205 [2024-07-24 21:48:12.698822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.205 [2024-07-24 21:48:12.789436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.205 [2024-07-24 21:48:12.843697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:07.722  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:07.722 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:07.722 00:08:07.722 real 0m3.041s 00:08:07.722 user 0m2.184s 00:08:07.722 sys 0m0.920s 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:07.722 ************************************ 00:08:07.722 END TEST dd_offset_magic 00:08:07.722 
************************************ 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:07.722 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:07.722 [2024-07-24 21:48:13.302238] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:07.722 [2024-07-24 21:48:13.302337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75935 ] 00:08:07.722 { 00:08:07.722 "subsystems": [ 00:08:07.722 { 00:08:07.722 "subsystem": "bdev", 00:08:07.722 "config": [ 00:08:07.722 { 00:08:07.722 "params": { 00:08:07.722 "trtype": "pcie", 00:08:07.722 "traddr": "0000:00:10.0", 00:08:07.722 "name": "Nvme0" 00:08:07.722 }, 00:08:07.722 "method": "bdev_nvme_attach_controller" 00:08:07.722 }, 00:08:07.722 { 00:08:07.722 "params": { 00:08:07.722 "trtype": "pcie", 00:08:07.722 "traddr": "0000:00:11.0", 00:08:07.722 "name": "Nvme1" 00:08:07.722 }, 00:08:07.722 "method": "bdev_nvme_attach_controller" 00:08:07.722 }, 00:08:07.722 { 00:08:07.722 "method": "bdev_wait_for_examine" 00:08:07.722 } 00:08:07.722 ] 00:08:07.722 } 00:08:07.722 ] 00:08:07.722 } 00:08:07.722 [2024-07-24 21:48:13.438793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.981 [2024-07-24 21:48:13.526219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.981 [2024-07-24 21:48:13.581133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.238  Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:08.238 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:08.238 21:48:13 
spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:08.238 21:48:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:08.529 [2024-07-24 21:48:13.992641] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:08.529 [2024-07-24 21:48:13.992747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75950 ] 00:08:08.529 { 00:08:08.529 "subsystems": [ 00:08:08.529 { 00:08:08.529 "subsystem": "bdev", 00:08:08.529 "config": [ 00:08:08.529 { 00:08:08.529 "params": { 00:08:08.529 "trtype": "pcie", 00:08:08.529 "traddr": "0000:00:10.0", 00:08:08.529 "name": "Nvme0" 00:08:08.529 }, 00:08:08.529 "method": "bdev_nvme_attach_controller" 00:08:08.529 }, 00:08:08.529 { 00:08:08.529 "params": { 00:08:08.529 "trtype": "pcie", 00:08:08.529 "traddr": "0000:00:11.0", 00:08:08.529 "name": "Nvme1" 00:08:08.529 }, 00:08:08.529 "method": "bdev_nvme_attach_controller" 00:08:08.529 }, 00:08:08.529 { 00:08:08.529 "method": "bdev_wait_for_examine" 00:08:08.529 } 00:08:08.529 ] 00:08:08.529 } 00:08:08.529 ] 00:08:08.529 } 00:08:08.529 [2024-07-24 21:48:14.127258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.529 [2024-07-24 21:48:14.213749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.799 [2024-07-24 21:48:14.267787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.057  Copying: 5120/5120 [kB] (average 833 MBps) 00:08:09.057 00:08:09.057 21:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:09.057 00:08:09.057 real 0m7.038s 00:08:09.057 user 0m5.117s 00:08:09.057 sys 0m3.254s 00:08:09.057 21:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.057 ************************************ 00:08:09.057 END TEST spdk_dd_bdev_to_bdev 00:08:09.057 21:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:09.057 ************************************ 00:08:09.057 21:48:14 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:09.057 21:48:14 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:09.057 21:48:14 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:09.057 21:48:14 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.057 21:48:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:09.057 ************************************ 00:08:09.057 START TEST spdk_dd_uring 00:08:09.057 ************************************ 00:08:09.057 21:48:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:09.316 * Looking for test storage... 
00:08:09.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:09.316 ************************************ 00:08:09.316 START TEST dd_uring_copy 00:08:09.316 ************************************ 00:08:09.316 
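The dd_uring_copy trace that follows builds a 512M zram device, exposes it as a uring bdev (uring0) beside a 512-byte-block malloc bdev (malloc0), and round-trips a generated magic string through both. A minimal standalone sketch of the same flow is below; it assumes the stock /sys/block/zram<id>/disksize attribute, and spdk_dd, uring.json and magic.dump0 stand in for the full build/bin/spdk_dd path, the JSON the test feeds through /dev/fd/62, and the magic-plus-zero-padding file built earlier in the test.

# sketch only: id and uring.json are illustrative; the flags match the invocations traced below
id=$(cat /sys/class/zram-control/hot_add)        # allocate a fresh zram device
echo 512M > "/sys/block/zram${id}/disksize"      # size it, as set_zram_dev does
cat > uring.json <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
    "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram${id}", "name": "uring0" },
    "method": "bdev_uring_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
spdk_dd --if=magic.dump0 --ob=uring0 --json uring.json   # file -> uring bdev
spdk_dd --ib=uring0 --of=magic.dump1 --json uring.json   # uring bdev -> file
diff -q magic.dump0 magic.dump1                          # round trip must match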
21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1121 -- # uring_zram_copy 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=9e6lv3v3l7jvxsn7jo2ibp6qbv6z47nbsilad6uyp8vl5nyv4a016ujkrwm7fr8o47e3qnoq8qxkv0mx2hvn92sqsb38qjqsarbcx51f09tysk1c9aglhugkvyi6jr1n0sc2er4evawyf3b8llfecy37ac6uhqub0h5jdw7zks185zmt3q3mm4p39xcdel8mt8y5po2d9derrsdoxda3dzydi0tuslpsnlb2p1wlo6eap4y8pshc1ylw3oagr0xsm1mcxz47gsj4dnfp9jv4pqxox3hjtech464iggo2lpviomtn301ih9g18c4ikmqf0i0o0gr0ge4l74dlhaill7e58r7ik31lvdt6rmgsjx08spjk2w0g6vdr8enj3qzxnwnhcpt5muxxa3eqkxm5rnrge57rw6hw1gsd9blpunqy43wp0wg7dfvzjvsvgas1r8v4s7gsw7mjse0ulq6zawdw8nrubmlsk4li0qc6tii7lui8jgd47a4qt5jro6ja426yt5z3mwuyzpvskq926m7u6p48jlmyxrfmv50ye8wnbh5hoygnprlcjfqj5xakmtngpocwzqkt9bao02aokc10s7ws1tae7q2txxh0xdztw4aypmwtwn5pz4to4qk67jvf7g8kxo69cbbatzo3qotw05w5676tdwqmgkf08xo1tc1kmh7qvmt7r8tzr90516ulk1trp7zj9kneplddokv5r8twe52siz07loq3mbrrmkaatzp143dyba4r8zsj4q65r7gslidacvwpygc9jsp8vv2hob1xgj758lv6cw1917hz0pnhzs3znfymc1gih6hjn2o57b7vql5jvmdy9su498l3bgci0pfyotukps4rr1ehh5u5g13h1rolvp6nuorarplal9omgqrjhixo1ijnzbjxha974mwwg1wxecc50yvfsquyi0ao9mksnqfk8hx236ga7oq83cq5iblkml1h434gbocnnekbxegm4bfquri9pp9mji8p6z5yhgy5 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 9e6lv3v3l7jvxsn7jo2ibp6qbv6z47nbsilad6uyp8vl5nyv4a016ujkrwm7fr8o47e3qnoq8qxkv0mx2hvn92sqsb38qjqsarbcx51f09tysk1c9aglhugkvyi6jr1n0sc2er4evawyf3b8llfecy37ac6uhqub0h5jdw7zks185zmt3q3mm4p39xcdel8mt8y5po2d9derrsdoxda3dzydi0tuslpsnlb2p1wlo6eap4y8pshc1ylw3oagr0xsm1mcxz47gsj4dnfp9jv4pqxox3hjtech464iggo2lpviomtn301ih9g18c4ikmqf0i0o0gr0ge4l74dlhaill7e58r7ik31lvdt6rmgsjx08spjk2w0g6vdr8enj3qzxnwnhcpt5muxxa3eqkxm5rnrge57rw6hw1gsd9blpunqy43wp0wg7dfvzjvsvgas1r8v4s7gsw7mjse0ulq6zawdw8nrubmlsk4li0qc6tii7lui8jgd47a4qt5jro6ja426yt5z3mwuyzpvskq926m7u6p48jlmyxrfmv50ye8wnbh5hoygnprlcjfqj5xakmtngpocwzqkt9bao02aokc10s7ws1tae7q2txxh0xdztw4aypmwtwn5pz4to4qk67jvf7g8kxo69cbbatzo3qotw05w5676tdwqmgkf08xo1tc1kmh7qvmt7r8tzr90516ulk1trp7zj9kneplddokv5r8twe52siz07loq3mbrrmkaatzp143dyba4r8zsj4q65r7gslidacvwpygc9jsp8vv2hob1xgj758lv6cw1917hz0pnhzs3znfymc1gih6hjn2o57b7vql5jvmdy9su498l3bgci0pfyotukps4rr1ehh5u5g13h1rolvp6nuorarplal9omgqrjhixo1ijnzbjxha974mwwg1wxecc50yvfsquyi0ao9mksnqfk8hx236ga7oq83cq5iblkml1h434gbocnnekbxegm4bfquri9pp9mji8p6z5yhgy5 00:08:09.316 21:48:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:09.316 [2024-07-24 21:48:14.880293] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:09.316 [2024-07-24 21:48:14.880417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76020 ] 00:08:09.316 [2024-07-24 21:48:15.016227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.575 [2024-07-24 21:48:15.110193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.575 [2024-07-24 21:48:15.167934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.706  Copying: 511/511 [MB] (average 1145 MBps) 00:08:10.706 00:08:10.706 21:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:10.706 21:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:10.706 21:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:10.706 21:48:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.706 [2024-07-24 21:48:16.285188] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:10.706 [2024-07-24 21:48:16.285270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76042 ] 00:08:10.706 { 00:08:10.706 "subsystems": [ 00:08:10.706 { 00:08:10.706 "subsystem": "bdev", 00:08:10.706 "config": [ 00:08:10.706 { 00:08:10.706 "params": { 00:08:10.706 "block_size": 512, 00:08:10.706 "num_blocks": 1048576, 00:08:10.706 "name": "malloc0" 00:08:10.706 }, 00:08:10.706 "method": "bdev_malloc_create" 00:08:10.706 }, 00:08:10.706 { 00:08:10.706 "params": { 00:08:10.706 "filename": "/dev/zram1", 00:08:10.706 "name": "uring0" 00:08:10.706 }, 00:08:10.706 "method": "bdev_uring_create" 00:08:10.706 }, 00:08:10.706 { 00:08:10.706 "method": "bdev_wait_for_examine" 00:08:10.706 } 00:08:10.706 ] 00:08:10.706 } 00:08:10.706 ] 00:08:10.706 } 00:08:10.706 [2024-07-24 21:48:16.416223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.964 [2024-07-24 21:48:16.502116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.964 [2024-07-24 21:48:16.556897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.841  Copying: 230/512 [MB] (230 MBps) Copying: 461/512 [MB] (230 MBps) Copying: 512/512 [MB] (average 231 MBps) 00:08:13.841 00:08:13.841 21:48:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:13.841 21:48:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:13.841 21:48:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:13.841 21:48:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:13.841 [2024-07-24 21:48:19.416213] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:13.841 [2024-07-24 21:48:19.416344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76086 ] 00:08:13.841 { 00:08:13.841 "subsystems": [ 00:08:13.841 { 00:08:13.841 "subsystem": "bdev", 00:08:13.841 "config": [ 00:08:13.841 { 00:08:13.841 "params": { 00:08:13.841 "block_size": 512, 00:08:13.841 "num_blocks": 1048576, 00:08:13.841 "name": "malloc0" 00:08:13.841 }, 00:08:13.841 "method": "bdev_malloc_create" 00:08:13.841 }, 00:08:13.841 { 00:08:13.841 "params": { 00:08:13.841 "filename": "/dev/zram1", 00:08:13.841 "name": "uring0" 00:08:13.841 }, 00:08:13.841 "method": "bdev_uring_create" 00:08:13.841 }, 00:08:13.841 { 00:08:13.841 "method": "bdev_wait_for_examine" 00:08:13.841 } 00:08:13.841 ] 00:08:13.841 } 00:08:13.841 ] 00:08:13.841 } 00:08:13.841 [2024-07-24 21:48:19.554975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.099 [2024-07-24 21:48:19.637200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.099 [2024-07-24 21:48:19.692347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.599  Copying: 187/512 [MB] (187 MBps) Copying: 361/512 [MB] (174 MBps) Copying: 512/512 [MB] (average 182 MBps) 00:08:17.599 00:08:17.599 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:17.599 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 9e6lv3v3l7jvxsn7jo2ibp6qbv6z47nbsilad6uyp8vl5nyv4a016ujkrwm7fr8o47e3qnoq8qxkv0mx2hvn92sqsb38qjqsarbcx51f09tysk1c9aglhugkvyi6jr1n0sc2er4evawyf3b8llfecy37ac6uhqub0h5jdw7zks185zmt3q3mm4p39xcdel8mt8y5po2d9derrsdoxda3dzydi0tuslpsnlb2p1wlo6eap4y8pshc1ylw3oagr0xsm1mcxz47gsj4dnfp9jv4pqxox3hjtech464iggo2lpviomtn301ih9g18c4ikmqf0i0o0gr0ge4l74dlhaill7e58r7ik31lvdt6rmgsjx08spjk2w0g6vdr8enj3qzxnwnhcpt5muxxa3eqkxm5rnrge57rw6hw1gsd9blpunqy43wp0wg7dfvzjvsvgas1r8v4s7gsw7mjse0ulq6zawdw8nrubmlsk4li0qc6tii7lui8jgd47a4qt5jro6ja426yt5z3mwuyzpvskq926m7u6p48jlmyxrfmv50ye8wnbh5hoygnprlcjfqj5xakmtngpocwzqkt9bao02aokc10s7ws1tae7q2txxh0xdztw4aypmwtwn5pz4to4qk67jvf7g8kxo69cbbatzo3qotw05w5676tdwqmgkf08xo1tc1kmh7qvmt7r8tzr90516ulk1trp7zj9kneplddokv5r8twe52siz07loq3mbrrmkaatzp143dyba4r8zsj4q65r7gslidacvwpygc9jsp8vv2hob1xgj758lv6cw1917hz0pnhzs3znfymc1gih6hjn2o57b7vql5jvmdy9su498l3bgci0pfyotukps4rr1ehh5u5g13h1rolvp6nuorarplal9omgqrjhixo1ijnzbjxha974mwwg1wxecc50yvfsquyi0ao9mksnqfk8hx236ga7oq83cq5iblkml1h434gbocnnekbxegm4bfquri9pp9mji8p6z5yhgy5 == 
\9\e\6\l\v\3\v\3\l\7\j\v\x\s\n\7\j\o\2\i\b\p\6\q\b\v\6\z\4\7\n\b\s\i\l\a\d\6\u\y\p\8\v\l\5\n\y\v\4\a\0\1\6\u\j\k\r\w\m\7\f\r\8\o\4\7\e\3\q\n\o\q\8\q\x\k\v\0\m\x\2\h\v\n\9\2\s\q\s\b\3\8\q\j\q\s\a\r\b\c\x\5\1\f\0\9\t\y\s\k\1\c\9\a\g\l\h\u\g\k\v\y\i\6\j\r\1\n\0\s\c\2\e\r\4\e\v\a\w\y\f\3\b\8\l\l\f\e\c\y\3\7\a\c\6\u\h\q\u\b\0\h\5\j\d\w\7\z\k\s\1\8\5\z\m\t\3\q\3\m\m\4\p\3\9\x\c\d\e\l\8\m\t\8\y\5\p\o\2\d\9\d\e\r\r\s\d\o\x\d\a\3\d\z\y\d\i\0\t\u\s\l\p\s\n\l\b\2\p\1\w\l\o\6\e\a\p\4\y\8\p\s\h\c\1\y\l\w\3\o\a\g\r\0\x\s\m\1\m\c\x\z\4\7\g\s\j\4\d\n\f\p\9\j\v\4\p\q\x\o\x\3\h\j\t\e\c\h\4\6\4\i\g\g\o\2\l\p\v\i\o\m\t\n\3\0\1\i\h\9\g\1\8\c\4\i\k\m\q\f\0\i\0\o\0\g\r\0\g\e\4\l\7\4\d\l\h\a\i\l\l\7\e\5\8\r\7\i\k\3\1\l\v\d\t\6\r\m\g\s\j\x\0\8\s\p\j\k\2\w\0\g\6\v\d\r\8\e\n\j\3\q\z\x\n\w\n\h\c\p\t\5\m\u\x\x\a\3\e\q\k\x\m\5\r\n\r\g\e\5\7\r\w\6\h\w\1\g\s\d\9\b\l\p\u\n\q\y\4\3\w\p\0\w\g\7\d\f\v\z\j\v\s\v\g\a\s\1\r\8\v\4\s\7\g\s\w\7\m\j\s\e\0\u\l\q\6\z\a\w\d\w\8\n\r\u\b\m\l\s\k\4\l\i\0\q\c\6\t\i\i\7\l\u\i\8\j\g\d\4\7\a\4\q\t\5\j\r\o\6\j\a\4\2\6\y\t\5\z\3\m\w\u\y\z\p\v\s\k\q\9\2\6\m\7\u\6\p\4\8\j\l\m\y\x\r\f\m\v\5\0\y\e\8\w\n\b\h\5\h\o\y\g\n\p\r\l\c\j\f\q\j\5\x\a\k\m\t\n\g\p\o\c\w\z\q\k\t\9\b\a\o\0\2\a\o\k\c\1\0\s\7\w\s\1\t\a\e\7\q\2\t\x\x\h\0\x\d\z\t\w\4\a\y\p\m\w\t\w\n\5\p\z\4\t\o\4\q\k\6\7\j\v\f\7\g\8\k\x\o\6\9\c\b\b\a\t\z\o\3\q\o\t\w\0\5\w\5\6\7\6\t\d\w\q\m\g\k\f\0\8\x\o\1\t\c\1\k\m\h\7\q\v\m\t\7\r\8\t\z\r\9\0\5\1\6\u\l\k\1\t\r\p\7\z\j\9\k\n\e\p\l\d\d\o\k\v\5\r\8\t\w\e\5\2\s\i\z\0\7\l\o\q\3\m\b\r\r\m\k\a\a\t\z\p\1\4\3\d\y\b\a\4\r\8\z\s\j\4\q\6\5\r\7\g\s\l\i\d\a\c\v\w\p\y\g\c\9\j\s\p\8\v\v\2\h\o\b\1\x\g\j\7\5\8\l\v\6\c\w\1\9\1\7\h\z\0\p\n\h\z\s\3\z\n\f\y\m\c\1\g\i\h\6\h\j\n\2\o\5\7\b\7\v\q\l\5\j\v\m\d\y\9\s\u\4\9\8\l\3\b\g\c\i\0\p\f\y\o\t\u\k\p\s\4\r\r\1\e\h\h\5\u\5\g\1\3\h\1\r\o\l\v\p\6\n\u\o\r\a\r\p\l\a\l\9\o\m\g\q\r\j\h\i\x\o\1\i\j\n\z\b\j\x\h\a\9\7\4\m\w\w\g\1\w\x\e\c\c\5\0\y\v\f\s\q\u\y\i\0\a\o\9\m\k\s\n\q\f\k\8\h\x\2\3\6\g\a\7\o\q\8\3\c\q\5\i\b\l\k\m\l\1\h\4\3\4\g\b\o\c\n\n\e\k\b\x\e\g\m\4\b\f\q\u\r\i\9\p\p\9\m\j\i\8\p\6\z\5\y\h\g\y\5 ]] 00:08:17.599 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:17.599 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 9e6lv3v3l7jvxsn7jo2ibp6qbv6z47nbsilad6uyp8vl5nyv4a016ujkrwm7fr8o47e3qnoq8qxkv0mx2hvn92sqsb38qjqsarbcx51f09tysk1c9aglhugkvyi6jr1n0sc2er4evawyf3b8llfecy37ac6uhqub0h5jdw7zks185zmt3q3mm4p39xcdel8mt8y5po2d9derrsdoxda3dzydi0tuslpsnlb2p1wlo6eap4y8pshc1ylw3oagr0xsm1mcxz47gsj4dnfp9jv4pqxox3hjtech464iggo2lpviomtn301ih9g18c4ikmqf0i0o0gr0ge4l74dlhaill7e58r7ik31lvdt6rmgsjx08spjk2w0g6vdr8enj3qzxnwnhcpt5muxxa3eqkxm5rnrge57rw6hw1gsd9blpunqy43wp0wg7dfvzjvsvgas1r8v4s7gsw7mjse0ulq6zawdw8nrubmlsk4li0qc6tii7lui8jgd47a4qt5jro6ja426yt5z3mwuyzpvskq926m7u6p48jlmyxrfmv50ye8wnbh5hoygnprlcjfqj5xakmtngpocwzqkt9bao02aokc10s7ws1tae7q2txxh0xdztw4aypmwtwn5pz4to4qk67jvf7g8kxo69cbbatzo3qotw05w5676tdwqmgkf08xo1tc1kmh7qvmt7r8tzr90516ulk1trp7zj9kneplddokv5r8twe52siz07loq3mbrrmkaatzp143dyba4r8zsj4q65r7gslidacvwpygc9jsp8vv2hob1xgj758lv6cw1917hz0pnhzs3znfymc1gih6hjn2o57b7vql5jvmdy9su498l3bgci0pfyotukps4rr1ehh5u5g13h1rolvp6nuorarplal9omgqrjhixo1ijnzbjxha974mwwg1wxecc50yvfsquyi0ao9mksnqfk8hx236ga7oq83cq5iblkml1h434gbocnnekbxegm4bfquri9pp9mji8p6z5yhgy5 == 
\9\e\6\l\v\3\v\3\l\7\j\v\x\s\n\7\j\o\2\i\b\p\6\q\b\v\6\z\4\7\n\b\s\i\l\a\d\6\u\y\p\8\v\l\5\n\y\v\4\a\0\1\6\u\j\k\r\w\m\7\f\r\8\o\4\7\e\3\q\n\o\q\8\q\x\k\v\0\m\x\2\h\v\n\9\2\s\q\s\b\3\8\q\j\q\s\a\r\b\c\x\5\1\f\0\9\t\y\s\k\1\c\9\a\g\l\h\u\g\k\v\y\i\6\j\r\1\n\0\s\c\2\e\r\4\e\v\a\w\y\f\3\b\8\l\l\f\e\c\y\3\7\a\c\6\u\h\q\u\b\0\h\5\j\d\w\7\z\k\s\1\8\5\z\m\t\3\q\3\m\m\4\p\3\9\x\c\d\e\l\8\m\t\8\y\5\p\o\2\d\9\d\e\r\r\s\d\o\x\d\a\3\d\z\y\d\i\0\t\u\s\l\p\s\n\l\b\2\p\1\w\l\o\6\e\a\p\4\y\8\p\s\h\c\1\y\l\w\3\o\a\g\r\0\x\s\m\1\m\c\x\z\4\7\g\s\j\4\d\n\f\p\9\j\v\4\p\q\x\o\x\3\h\j\t\e\c\h\4\6\4\i\g\g\o\2\l\p\v\i\o\m\t\n\3\0\1\i\h\9\g\1\8\c\4\i\k\m\q\f\0\i\0\o\0\g\r\0\g\e\4\l\7\4\d\l\h\a\i\l\l\7\e\5\8\r\7\i\k\3\1\l\v\d\t\6\r\m\g\s\j\x\0\8\s\p\j\k\2\w\0\g\6\v\d\r\8\e\n\j\3\q\z\x\n\w\n\h\c\p\t\5\m\u\x\x\a\3\e\q\k\x\m\5\r\n\r\g\e\5\7\r\w\6\h\w\1\g\s\d\9\b\l\p\u\n\q\y\4\3\w\p\0\w\g\7\d\f\v\z\j\v\s\v\g\a\s\1\r\8\v\4\s\7\g\s\w\7\m\j\s\e\0\u\l\q\6\z\a\w\d\w\8\n\r\u\b\m\l\s\k\4\l\i\0\q\c\6\t\i\i\7\l\u\i\8\j\g\d\4\7\a\4\q\t\5\j\r\o\6\j\a\4\2\6\y\t\5\z\3\m\w\u\y\z\p\v\s\k\q\9\2\6\m\7\u\6\p\4\8\j\l\m\y\x\r\f\m\v\5\0\y\e\8\w\n\b\h\5\h\o\y\g\n\p\r\l\c\j\f\q\j\5\x\a\k\m\t\n\g\p\o\c\w\z\q\k\t\9\b\a\o\0\2\a\o\k\c\1\0\s\7\w\s\1\t\a\e\7\q\2\t\x\x\h\0\x\d\z\t\w\4\a\y\p\m\w\t\w\n\5\p\z\4\t\o\4\q\k\6\7\j\v\f\7\g\8\k\x\o\6\9\c\b\b\a\t\z\o\3\q\o\t\w\0\5\w\5\6\7\6\t\d\w\q\m\g\k\f\0\8\x\o\1\t\c\1\k\m\h\7\q\v\m\t\7\r\8\t\z\r\9\0\5\1\6\u\l\k\1\t\r\p\7\z\j\9\k\n\e\p\l\d\d\o\k\v\5\r\8\t\w\e\5\2\s\i\z\0\7\l\o\q\3\m\b\r\r\m\k\a\a\t\z\p\1\4\3\d\y\b\a\4\r\8\z\s\j\4\q\6\5\r\7\g\s\l\i\d\a\c\v\w\p\y\g\c\9\j\s\p\8\v\v\2\h\o\b\1\x\g\j\7\5\8\l\v\6\c\w\1\9\1\7\h\z\0\p\n\h\z\s\3\z\n\f\y\m\c\1\g\i\h\6\h\j\n\2\o\5\7\b\7\v\q\l\5\j\v\m\d\y\9\s\u\4\9\8\l\3\b\g\c\i\0\p\f\y\o\t\u\k\p\s\4\r\r\1\e\h\h\5\u\5\g\1\3\h\1\r\o\l\v\p\6\n\u\o\r\a\r\p\l\a\l\9\o\m\g\q\r\j\h\i\x\o\1\i\j\n\z\b\j\x\h\a\9\7\4\m\w\w\g\1\w\x\e\c\c\5\0\y\v\f\s\q\u\y\i\0\a\o\9\m\k\s\n\q\f\k\8\h\x\2\3\6\g\a\7\o\q\8\3\c\q\5\i\b\l\k\m\l\1\h\4\3\4\g\b\o\c\n\n\e\k\b\x\e\g\m\4\b\f\q\u\r\i\9\p\p\9\m\j\i\8\p\6\z\5\y\h\g\y\5 ]] 00:08:17.599 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:17.856 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:17.856 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:17.856 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:17.856 21:48:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:17.856 [2024-07-24 21:48:23.502017] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:17.856 [2024-07-24 21:48:23.502119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76162 ] 00:08:17.856 { 00:08:17.856 "subsystems": [ 00:08:17.856 { 00:08:17.856 "subsystem": "bdev", 00:08:17.856 "config": [ 00:08:17.856 { 00:08:17.856 "params": { 00:08:17.856 "block_size": 512, 00:08:17.856 "num_blocks": 1048576, 00:08:17.856 "name": "malloc0" 00:08:17.856 }, 00:08:17.856 "method": "bdev_malloc_create" 00:08:17.856 }, 00:08:17.856 { 00:08:17.856 "params": { 00:08:17.856 "filename": "/dev/zram1", 00:08:17.856 "name": "uring0" 00:08:17.856 }, 00:08:17.856 "method": "bdev_uring_create" 00:08:17.856 }, 00:08:17.856 { 00:08:17.856 "method": "bdev_wait_for_examine" 00:08:17.856 } 00:08:17.856 ] 00:08:17.856 } 00:08:17.856 ] 00:08:17.856 } 00:08:18.114 [2024-07-24 21:48:23.641553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.114 [2024-07-24 21:48:23.739119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.114 [2024-07-24 21:48:23.796682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.183  Copying: 152/512 [MB] (152 MBps) Copying: 307/512 [MB] (154 MBps) Copying: 462/512 [MB] (155 MBps) Copying: 512/512 [MB] (average 154 MBps) 00:08:22.183 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:22.183 21:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.184 [2024-07-24 21:48:27.768654] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:22.184 [2024-07-24 21:48:27.768782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76227 ] 00:08:22.184 { 00:08:22.184 "subsystems": [ 00:08:22.184 { 00:08:22.184 "subsystem": "bdev", 00:08:22.184 "config": [ 00:08:22.184 { 00:08:22.184 "params": { 00:08:22.184 "block_size": 512, 00:08:22.184 "num_blocks": 1048576, 00:08:22.184 "name": "malloc0" 00:08:22.184 }, 00:08:22.184 "method": "bdev_malloc_create" 00:08:22.184 }, 00:08:22.184 { 00:08:22.184 "params": { 00:08:22.184 "filename": "/dev/zram1", 00:08:22.184 "name": "uring0" 00:08:22.184 }, 00:08:22.184 "method": "bdev_uring_create" 00:08:22.184 }, 00:08:22.184 { 00:08:22.184 "params": { 00:08:22.184 "name": "uring0" 00:08:22.184 }, 00:08:22.184 "method": "bdev_uring_delete" 00:08:22.184 }, 00:08:22.184 { 00:08:22.184 "method": "bdev_wait_for_examine" 00:08:22.184 } 00:08:22.184 ] 00:08:22.184 } 00:08:22.184 ] 00:08:22.184 } 00:08:22.442 [2024-07-24 21:48:27.907435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.442 [2024-07-24 21:48:27.996427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.442 [2024-07-24 21:48:28.050704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.958  Copying: 0/0 [B] (average 0 Bps) 00:08:22.958 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.958 21:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:23.216 [2024-07-24 21:48:28.696463] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:23.216 [2024-07-24 21:48:28.696585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76256 ] 00:08:23.216 { 00:08:23.216 "subsystems": [ 00:08:23.216 { 00:08:23.216 "subsystem": "bdev", 00:08:23.216 "config": [ 00:08:23.216 { 00:08:23.216 "params": { 00:08:23.216 "block_size": 512, 00:08:23.216 "num_blocks": 1048576, 00:08:23.216 "name": "malloc0" 00:08:23.216 }, 00:08:23.216 "method": "bdev_malloc_create" 00:08:23.216 }, 00:08:23.216 { 00:08:23.216 "params": { 00:08:23.216 "filename": "/dev/zram1", 00:08:23.216 "name": "uring0" 00:08:23.216 }, 00:08:23.216 "method": "bdev_uring_create" 00:08:23.216 }, 00:08:23.216 { 00:08:23.216 "params": { 00:08:23.216 "name": "uring0" 00:08:23.216 }, 00:08:23.216 "method": "bdev_uring_delete" 00:08:23.216 }, 00:08:23.216 { 00:08:23.216 "method": "bdev_wait_for_examine" 00:08:23.216 } 00:08:23.216 ] 00:08:23.216 } 00:08:23.216 ] 00:08:23.216 } 00:08:23.216 [2024-07-24 21:48:28.834899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.216 [2024-07-24 21:48:28.931737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.480 [2024-07-24 21:48:28.985876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:23.480 [2024-07-24 21:48:29.184410] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:23.480 [2024-07-24 21:48:29.184477] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:23.480 [2024-07-24 21:48:29.184504] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:23.480 [2024-07-24 21:48:29.184515] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.048 [2024-07-24 21:48:29.493853] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:08:24.048 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 
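Two checks close the copy test traced above: the 1024 bytes read back from the dump must equal the generated magic string, and once uring0 has been removed with bdev_uring_delete a further spdk_dd read has to fail, which is what the NOT wrapper and the es= bookkeeping assert. A hedged sketch of both checks, assuming $magic still holds the generated string, that the read is redirected from magic.dump1 (the trace does not show the redirect), that del.json carries the bdev_uring_delete config printed above, and with spdk_dd again standing in for the full build/bin path:

read -rn1024 verify_magic < magic.dump1         # first 1 KiB that came back out of uring0
[[ $verify_magic == "$magic" ]] || { echo "magic mismatch" >&2; exit 1; }
if spdk_dd --ib=uring0 --of=/dev/null --json del.json; then   # negative path: must not succeed
    echo "unexpected success reading deleted bdev uring0" >&2
    exit 1
fi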
00:08:24.306 00:08:24.306 real 0m15.033s 00:08:24.306 user 0m10.017s 00:08:24.306 sys 0m12.289s 00:08:24.306 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.306 21:48:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:24.306 ************************************ 00:08:24.306 END TEST dd_uring_copy 00:08:24.306 ************************************ 00:08:24.306 00:08:24.306 real 0m15.176s 00:08:24.306 user 0m10.069s 00:08:24.306 sys 0m12.382s 00:08:24.306 21:48:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.306 21:48:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:24.306 ************************************ 00:08:24.306 END TEST spdk_dd_uring 00:08:24.306 ************************************ 00:08:24.306 21:48:29 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:24.306 21:48:29 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:24.306 21:48:29 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.306 21:48:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:24.306 ************************************ 00:08:24.306 START TEST spdk_dd_sparse 00:08:24.306 ************************************ 00:08:24.306 21:48:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:24.306 * Looking for test storage... 00:08:24.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:24.306 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:24.565 1+0 records in 00:08:24.565 1+0 records out 00:08:24.565 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00617881 s, 679 MB/s 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:24.565 1+0 records in 00:08:24.565 1+0 records out 00:08:24.565 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00785088 s, 534 MB/s 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:24.565 1+0 records in 00:08:24.565 1+0 records out 00:08:24.565 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00737115 s, 569 MB/s 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:24.565 ************************************ 00:08:24.565 START TEST dd_sparse_file_to_file 00:08:24.565 ************************************ 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # 
file_to_file 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:24.565 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:24.565 [2024-07-24 21:48:30.121006] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:24.565 [2024-07-24 21:48:30.121129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76342 ] 00:08:24.565 { 00:08:24.565 "subsystems": [ 00:08:24.565 { 00:08:24.565 "subsystem": "bdev", 00:08:24.565 "config": [ 00:08:24.565 { 00:08:24.565 "params": { 00:08:24.565 "block_size": 4096, 00:08:24.565 "filename": "dd_sparse_aio_disk", 00:08:24.565 "name": "dd_aio" 00:08:24.565 }, 00:08:24.565 "method": "bdev_aio_create" 00:08:24.565 }, 00:08:24.565 { 00:08:24.565 "params": { 00:08:24.565 "lvs_name": "dd_lvstore", 00:08:24.565 "bdev_name": "dd_aio" 00:08:24.565 }, 00:08:24.565 "method": "bdev_lvol_create_lvstore" 00:08:24.565 }, 00:08:24.565 { 00:08:24.565 "method": "bdev_wait_for_examine" 00:08:24.565 } 00:08:24.565 ] 00:08:24.565 } 00:08:24.565 ] 00:08:24.565 } 00:08:24.565 [2024-07-24 21:48:30.259810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.823 [2024-07-24 21:48:30.357050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.823 [2024-07-24 21:48:30.411042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.082  Copying: 12/36 [MB] (average 1000 MBps) 00:08:25.082 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- 
dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:25.082 00:08:25.082 real 0m0.714s 00:08:25.082 user 0m0.437s 00:08:25.082 sys 0m0.381s 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.082 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:25.082 ************************************ 00:08:25.082 END TEST dd_sparse_file_to_file 00:08:25.082 ************************************ 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:25.341 ************************************ 00:08:25.341 START TEST dd_sparse_file_to_bdev 00:08:25.341 ************************************ 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:25.341 21:48:30 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:25.341 [2024-07-24 21:48:30.880191] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:25.341 [2024-07-24 21:48:30.880290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76390 ] 00:08:25.341 { 00:08:25.341 "subsystems": [ 00:08:25.341 { 00:08:25.341 "subsystem": "bdev", 00:08:25.341 "config": [ 00:08:25.341 { 00:08:25.341 "params": { 00:08:25.341 "block_size": 4096, 00:08:25.341 "filename": "dd_sparse_aio_disk", 00:08:25.341 "name": "dd_aio" 00:08:25.341 }, 00:08:25.341 "method": "bdev_aio_create" 00:08:25.341 }, 00:08:25.341 { 00:08:25.341 "params": { 00:08:25.341 "lvs_name": "dd_lvstore", 00:08:25.341 "lvol_name": "dd_lvol", 00:08:25.341 "size_in_mib": 36, 00:08:25.341 "thin_provision": true 00:08:25.341 }, 00:08:25.341 "method": "bdev_lvol_create" 00:08:25.341 }, 00:08:25.341 { 00:08:25.341 "method": "bdev_wait_for_examine" 00:08:25.341 } 00:08:25.341 ] 00:08:25.341 } 00:08:25.341 ] 00:08:25.341 } 00:08:25.341 [2024-07-24 21:48:31.016013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.600 [2024-07-24 21:48:31.112603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.600 [2024-07-24 21:48:31.168666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.861  Copying: 12/36 [MB] (average 521 MBps) 00:08:25.861 00:08:25.861 00:08:25.861 real 0m0.641s 00:08:25.861 user 0m0.410s 00:08:25.861 sys 0m0.337s 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.861 ************************************ 00:08:25.861 END TEST dd_sparse_file_to_bdev 00:08:25.861 ************************************ 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:25.861 ************************************ 00:08:25.861 START TEST dd_sparse_bdev_to_file 00:08:25.861 ************************************ 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 
00:08:25.861 21:48:31 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:26.120 [2024-07-24 21:48:31.578684] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:26.120 [2024-07-24 21:48:31.578790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76423 ] 00:08:26.120 { 00:08:26.120 "subsystems": [ 00:08:26.120 { 00:08:26.120 "subsystem": "bdev", 00:08:26.120 "config": [ 00:08:26.120 { 00:08:26.120 "params": { 00:08:26.120 "block_size": 4096, 00:08:26.120 "filename": "dd_sparse_aio_disk", 00:08:26.120 "name": "dd_aio" 00:08:26.120 }, 00:08:26.120 "method": "bdev_aio_create" 00:08:26.120 }, 00:08:26.120 { 00:08:26.120 "method": "bdev_wait_for_examine" 00:08:26.120 } 00:08:26.120 ] 00:08:26.120 } 00:08:26.120 ] 00:08:26.120 } 00:08:26.120 [2024-07-24 21:48:31.716368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.120 [2024-07-24 21:48:31.810607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.377 [2024-07-24 21:48:31.863927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.634  Copying: 12/36 [MB] (average 1200 MBps) 00:08:26.634 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:26.634 00:08:26.634 real 0m0.653s 00:08:26.634 user 0m0.407s 00:08:26.634 sys 0m0.337s 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.634 ************************************ 00:08:26.634 END TEST dd_sparse_bdev_to_file 00:08:26.634 ************************************ 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:26.634 00:08:26.634 real 0m2.314s 00:08:26.634 user 0m1.371s 00:08:26.634 sys 0m1.232s 
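Note that the config printed for the read-back above contains only bdev_aio_create and bdev_wait_for_examine: the logical volume is not created a second time, it is rediscovered when dd_aio is examined, and spdk_dd then reads it out through --ib=dd_lvstore/dd_lvol into file_zero3. The verification is a double stat check, apparent size (%s) for the data and allocated blocks (%b) for the holes; a minimal sketch of that check (it would have to run before the cleanup above removes the files):

# Both numbers must match for the round trip to count as lossless.
if [ "$(stat --printf=%s file_zero2)" -eq "$(stat --printf=%s file_zero3)" ] &&
   [ "$(stat --printf=%b file_zero2)" -eq "$(stat --printf=%b file_zero3)" ]; then
  echo "round trip preserved both length and allocated blocks"
else
  echo "mismatch between file_zero2 and file_zero3"
fi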
00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.634 21:48:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:26.634 ************************************ 00:08:26.634 END TEST spdk_dd_sparse 00:08:26.634 ************************************ 00:08:26.634 21:48:32 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:26.634 21:48:32 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.634 21:48:32 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.634 21:48:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.634 ************************************ 00:08:26.634 START TEST spdk_dd_negative 00:08:26.634 ************************************ 00:08:26.634 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:26.893 * Looking for test storage... 00:08:26.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.893 ************************************ 00:08:26.893 START TEST dd_invalid_arguments 00:08:26.893 ************************************ 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.893 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:26.893 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:26.893 00:08:26.893 CPU options: 00:08:26.893 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:26.893 (like [0,1,10]) 00:08:26.893 --lcores lcore to CPU mapping list. The list is in the format: 00:08:26.893 [<,lcores[@CPUs]>...] 00:08:26.893 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:26.893 Within the group, '-' is used for range separator, 00:08:26.893 ',' is used for single number separator. 00:08:26.893 '( )' can be omitted for single element group, 00:08:26.893 '@' can be omitted if cpus and lcores have the same value 00:08:26.893 --disable-cpumask-locks Disable CPU core lock files. 00:08:26.893 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:26.893 pollers in the app support interrupt mode) 00:08:26.893 -p, --main-core main (primary) core for DPDK 00:08:26.893 00:08:26.893 Configuration options: 00:08:26.893 -c, --config, --json JSON config file 00:08:26.893 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:26.893 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:26.893 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:26.893 --rpcs-allowed comma-separated list of permitted RPCS 00:08:26.893 --json-ignore-init-errors don't exit on invalid config entry 00:08:26.893 00:08:26.893 Memory options: 00:08:26.893 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:26.893 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:26.893 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:26.893 -R, --huge-unlink unlink huge files after initialization 00:08:26.893 -n, --mem-channels number of memory channels used for DPDK 00:08:26.893 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:26.893 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:26.893 --no-huge run without using hugepages 00:08:26.893 -i, --shm-id shared memory ID (optional) 00:08:26.893 -g, --single-file-segments force creating just one hugetlbfs file 00:08:26.893 00:08:26.893 PCI options: 00:08:26.893 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:26.893 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:26.893 -u, --no-pci disable PCI access 00:08:26.893 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:26.893 00:08:26.893 Log options: 00:08:26.893 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:26.893 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:26.893 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:26.893 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:26.893 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:26.893 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:26.893 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:26.893 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:26.893 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:26.893 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:26.893 virtio_vfio_user, vmd) 00:08:26.893 --silence-noticelog 
disable notice level logging to stderr 00:08:26.893 00:08:26.893 Trace options: 00:08:26.893 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:26.893 setting 0 to disable trace (default 32768) 00:08:26.893 Tracepoints vary in size and can use more than one trace entry. 00:08:26.893 -e, --tpoint-group [:] 00:08:26.893 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:26.893 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:26.893 [2024-07-24 21:48:32.451762] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:26.893 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:26.893 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:26.893 a tracepoint group. First tpoint inside a group can be enabled by 00:08:26.893 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:26.894 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:26.894 in /include/spdk_internal/trace_defs.h 00:08:26.894 00:08:26.894 Other options: 00:08:26.894 -h, --help show this usage 00:08:26.894 -v, --version print SPDK version 00:08:26.894 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:26.894 --env-context Opaque context for use of the env implementation 00:08:26.894 00:08:26.894 Application specific: 00:08:26.894 [--------- DD Options ---------] 00:08:26.894 --if Input file. Must specify either --if or --ib. 00:08:26.894 --ib Input bdev. Must specifier either --if or --ib 00:08:26.894 --of Output file. Must specify either --of or --ob. 00:08:26.894 --ob Output bdev. Must specify either --of or --ob. 00:08:26.894 --iflag Input file flags. 00:08:26.894 --oflag Output file flags. 00:08:26.894 --bs I/O unit size (default: 4096) 00:08:26.894 --qd Queue depth (default: 2) 00:08:26.894 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:26.894 --skip Skip this many I/O units at start of input. (default: 0) 00:08:26.894 --seek Skip this many I/O units at start of output. (default: 0) 00:08:26.894 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:08:26.894 --sparse Enable hole skipping in input target 00:08:26.894 Available iflag and oflag values: 00:08:26.894 append - append mode 00:08:26.894 direct - use direct I/O for data 00:08:26.894 directory - fail unless a directory 00:08:26.894 dsync - use synchronized I/O for data 00:08:26.894 noatime - do not update access time 00:08:26.894 noctty - do not assign controlling terminal from file 00:08:26.894 nofollow - do not follow symlinks 00:08:26.894 nonblock - use non-blocking I/O 00:08:26.894 sync - use synchronized I/O for data and metadata 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:26.894 00:08:26.894 real 0m0.071s 00:08:26.894 user 0m0.045s 00:08:26.894 sys 0m0.024s 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:26.894 ************************************ 00:08:26.894 END TEST dd_invalid_arguments 00:08:26.894 ************************************ 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.894 ************************************ 00:08:26.894 START TEST dd_double_input 00:08:26.894 ************************************ 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:26.894 [2024-07-24 21:48:32.570893] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:26.894 00:08:26.894 real 0m0.068s 00:08:26.894 user 0m0.042s 00:08:26.894 sys 0m0.024s 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.894 21:48:32 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:26.894 ************************************ 00:08:26.894 END TEST dd_double_input 00:08:26.894 ************************************ 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.152 ************************************ 00:08:27.152 START TEST dd_double_output 00:08:27.152 ************************************ 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.152 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.152 21:48:32 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:27.153 [2024-07-24 21:48:32.694310] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:27.153 00:08:27.153 real 0m0.077s 00:08:27.153 user 0m0.048s 00:08:27.153 sys 0m0.028s 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:27.153 ************************************ 00:08:27.153 END TEST dd_double_output 00:08:27.153 ************************************ 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.153 ************************************ 00:08:27.153 START TEST dd_no_input 00:08:27.153 ************************************ 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:27.153 [2024-07-24 21:48:32.814976] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:27.153 00:08:27.153 real 0m0.070s 00:08:27.153 user 0m0.045s 00:08:27.153 sys 0m0.024s 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.153 21:48:32 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:27.153 ************************************ 00:08:27.153 END TEST dd_no_input 00:08:27.153 ************************************ 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.412 ************************************ 00:08:27.412 START TEST dd_no_output 00:08:27.412 ************************************ 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.412 [2024-07-24 21:48:32.929996] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:27.412 00:08:27.412 real 0m0.068s 00:08:27.412 user 0m0.048s 00:08:27.412 sys 0m0.019s 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:27.412 ************************************ 00:08:27.412 END TEST dd_no_output 00:08:27.412 ************************************ 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.412 ************************************ 00:08:27.412 START TEST dd_wrong_blocksize 00:08:27.412 ************************************ 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:32 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.412 21:48:32 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.412 [2024-07-24 21:48:33.043610] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:27.412 00:08:27.412 real 0m0.064s 00:08:27.412 user 0m0.038s 00:08:27.412 sys 0m0.026s 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:27.412 ************************************ 00:08:27.412 END TEST dd_wrong_blocksize 00:08:27.412 ************************************ 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.412 ************************************ 00:08:27.412 START TEST dd_smaller_blocksize 00:08:27.412 ************************************ 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:33 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.412 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.413 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.413 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.413 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.671 [2024-07-24 21:48:33.162993] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:27.671 [2024-07-24 21:48:33.163101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76641 ] 00:08:27.671 [2024-07-24 21:48:33.300056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.931 [2024-07-24 21:48:33.391169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.931 [2024-07-24 21:48:33.445002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.931 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:27.931 [2024-07-24 21:48:33.474795] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:27.931 [2024-07-24 21:48:33.474825] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.931 [2024-07-24 21:48:33.584119] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.205 00:08:28.205 real 0m0.562s 00:08:28.205 user 0m0.315s 00:08:28.205 sys 0m0.141s 00:08:28.205 ************************************ 00:08:28.205 END TEST dd_smaller_blocksize 00:08:28.205 ************************************ 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative 
-- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.205 ************************************ 00:08:28.205 START TEST dd_invalid_count 00:08:28.205 ************************************ 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.205 [2024-07-24 21:48:33.776845] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.205 00:08:28.205 real 0m0.072s 00:08:28.205 user 0m0.044s 00:08:28.205 sys 0m0.026s 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.205 ************************************ 00:08:28.205 END TEST dd_invalid_count 
00:08:28.205 ************************************ 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.205 ************************************ 00:08:28.205 START TEST dd_invalid_oflag 00:08:28.205 ************************************ 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.205 [2024-07-24 21:48:33.897397] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.205 ************************************ 00:08:28.205 END TEST dd_invalid_oflag 00:08:28.205 ************************************ 00:08:28.205 00:08:28.205 real 0m0.072s 00:08:28.205 user 0m0.051s 00:08:28.205 sys 0m0.020s 00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 
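Each negative case in this block follows the same pattern: spdk_dd is launched through the suite's NOT wrapper with an invalid option combination, prints a targeted error (here "--oflags may be used only with --of") and exits with status 22, which the wrapper converts into a passing test. A hedged stand-alone sketch of the same probe outside the harness:

# Expect spdk_dd to refuse --oflag when no --of output file is given.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0; then
  echo "unexpected success"
else
  es=$?
  echo "rejected with exit status $es (the harness checks for 22 here)"
fi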
00:08:28.205 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 ************************************ 00:08:28.466 START TEST dd_invalid_iflag 00:08:28.466 ************************************ 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.466 21:48:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.466 [2024-07-24 21:48:34.017878] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.466 00:08:28.466 real 0m0.070s 00:08:28.466 user 0m0.043s 00:08:28.466 sys 0m0.027s 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 ************************************ 00:08:28.466 END TEST 
dd_invalid_iflag 00:08:28.466 ************************************ 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 ************************************ 00:08:28.466 START TEST dd_unknown_flag 00:08:28.466 ************************************ 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.466 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:28.466 [2024-07-24 21:48:34.136452] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
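dd_unknown_flag, started above, hands spdk_dd --oflag=-1, a name that is not in the iflag/oflag list shown in the usage text earlier (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync), so the run aborts during flag parsing. A hand-run sketch against the suite's dump files:

# -1 is not a known file flag, so spdk_dd reports "Unknown file flag: -1" and fails.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
  --oflag=-1 || echo "unknown oflag rejected, as the test expects"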
00:08:28.466 [2024-07-24 21:48:34.136539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76733 ] 00:08:28.724 [2024-07-24 21:48:34.267957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.724 [2024-07-24 21:48:34.349533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.724 [2024-07-24 21:48:34.404238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.724 [2024-07-24 21:48:34.434432] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:28.724 [2024-07-24 21:48:34.434501] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.724 [2024-07-24 21:48:34.434574] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:28.724 [2024-07-24 21:48:34.434587] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.724 [2024-07-24 21:48:34.434820] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:28.724 [2024-07-24 21:48:34.434838] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.724 [2024-07-24 21:48:34.434888] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:28.724 [2024-07-24 21:48:34.434898] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:28.982 [2024-07-24 21:48:34.545833] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:28.982 00:08:28.982 real 0m0.546s 00:08:28.982 user 0m0.290s 00:08:28.982 sys 0m0.162s 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:28.982 ************************************ 00:08:28.982 END TEST dd_unknown_flag 00:08:28.982 ************************************ 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.982 ************************************ 00:08:28.982 START TEST dd_invalid_json 00:08:28.982 ************************************ 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.982 21:48:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:29.240 [2024-07-24 21:48:34.737907] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:29.240 [2024-07-24 21:48:34.738032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76762 ] 00:08:29.240 [2024-07-24 21:48:34.873221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.498 [2024-07-24 21:48:34.961170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.498 [2024-07-24 21:48:34.961244] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:29.498 [2024-07-24 21:48:34.961262] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:29.498 [2024-07-24 21:48:34.961271] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.498 [2024-07-24 21:48:34.961308] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:29.498 00:08:29.498 real 0m0.365s 00:08:29.498 user 0m0.181s 00:08:29.498 sys 0m0.082s 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:29.498 ************************************ 00:08:29.498 END TEST dd_invalid_json 00:08:29.498 ************************************ 00:08:29.498 00:08:29.498 real 0m2.796s 00:08:29.498 user 0m1.416s 00:08:29.498 sys 0m1.028s 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.498 21:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.498 ************************************ 00:08:29.498 END TEST spdk_dd_negative 00:08:29.498 ************************************ 00:08:29.498 ************************************ 00:08:29.498 END TEST spdk_dd 00:08:29.498 ************************************ 00:08:29.498 00:08:29.498 real 1m15.175s 00:08:29.498 user 0m48.240s 00:08:29.498 sys 0m32.654s 00:08:29.498 21:48:35 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.498 21:48:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:29.498 21:48:35 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:29.498 21:48:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:29.498 21:48:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:29.498 21:48:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.498 21:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.498 21:48:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:29.498 21:48:35 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:29.498 21:48:35 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:29.498 21:48:35 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:29.498 21:48:35 -- spdk/autotest.sh@283 -- # '[' tcp = 
rdma ']' 00:08:29.498 21:48:35 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:29.498 21:48:35 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:29.498 21:48:35 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:29.498 21:48:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.498 21:48:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.757 ************************************ 00:08:29.757 START TEST nvmf_tcp 00:08:29.757 ************************************ 00:08:29.757 21:48:35 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:29.757 * Looking for test storage... 00:08:29.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.757 21:48:35 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.757 21:48:35 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.757 21:48:35 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.757 21:48:35 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.757 21:48:35 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.757 21:48:35 nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.757 21:48:35 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.758 21:48:35 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:29.758 21:48:35 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:29.758 21:48:35 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:29.758 21:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:29.758 21:48:35 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:29.758 21:48:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:29.758 21:48:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.758 21:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.758 ************************************ 00:08:29.758 START TEST nvmf_host_management 00:08:29.758 ************************************ 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:29.758 * Looking for test storage... 
00:08:29.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:29.758 Cannot find device "nvmf_init_br" 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:29.758 Cannot find device "nvmf_tgt_br" 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:29.758 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.017 Cannot find device "nvmf_tgt_br2" 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:30.017 Cannot find device "nvmf_init_br" 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:30.017 Cannot find device "nvmf_tgt_br" 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:30.017 21:48:35 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:30.017 Cannot find device "nvmf_tgt_br2" 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:30.017 Cannot find device "nvmf_br" 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:30.017 Cannot find device "nvmf_init_if" 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.017 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
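Taken together, the nvmf_veth_init commands traced above and just below build a small bridged topology: one veth pair for the initiator on the host side (10.0.0.1) and one pair whose far end sits inside the nvmf_tgt_ns_spdk namespace (10.0.0.2), with the host-side ends enslaved to the nvmf_br bridge and reachability verified by ping. A condensed sketch of that sequence using the same names and addresses as this run (the second target interface and the iptables ACCEPT rules are left out for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the host-side ends
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2                                             # reachability check, as below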
00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:30.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:08:30.276 00:08:30.276 --- 10.0.0.2 ping statistics --- 00:08:30.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.276 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:30.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:30.276 00:08:30.276 --- 10.0.0.3 ping statistics --- 00:08:30.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.276 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:30.276 00:08:30.276 --- 10.0.0.1 ping statistics --- 00:08:30.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.276 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=77025 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 77025 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 77025 ']' 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:30.276 21:48:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.276 [2024-07-24 21:48:35.914672] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:30.276 [2024-07-24 21:48:35.914760] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.534 [2024-07-24 21:48:36.055706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.534 [2024-07-24 21:48:36.148523] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.534 [2024-07-24 21:48:36.148594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.534 [2024-07-24 21:48:36.148624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.534 [2024-07-24 21:48:36.148636] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.534 [2024-07-24 21:48:36.148646] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
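Because the target is started with -e 0xFFFF, the app_setup_trace notices above also spell out how to inspect its tracepoints if anything in this run needs debugging. A sketch of that recipe, using the command the notices themselves suggest (the spdk_trace tool is assumed to live under build/bin of this checkout):

# Snapshot the running nvmf target's trace ring (app name "nvmf", shm id 0),
# or keep the shm file for offline analysis as the notice recommends.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0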
00:08:30.534 [2024-07-24 21:48:36.148826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.534 [2024-07-24 21:48:36.149002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:30.534 [2024-07-24 21:48:36.149007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.534 [2024-07-24 21:48:36.148874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.535 [2024-07-24 21:48:36.206205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.469 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:31.469 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:08:31.469 21:48:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.469 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.469 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.469 21:48:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.469 21:48:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.470 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.470 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.470 [2024-07-24 21:48:36.971545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.470 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.470 21:48:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:31.470 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:31.470 21:48:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.470 21:48:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.470 Malloc0 00:08:31.470 [2024-07-24 21:48:37.051203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=77080 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 77080 /var/tmp/bdevperf.sock 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 77080 ']' 
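The rpcs.txt batch that cat pipes into rpc_cmd above is not echoed into the log; only its effects are visible (the Malloc0 bdev and the TCP listener on 10.0.0.2 port 4420). A representative sketch of the kind of batch that produces them, built from standard scripts/rpc.py methods and the sizes, serial, NQNs and address that do appear in this run; the harness's actual batch may differ:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # talks to /var/tmp/spdk.sock by default
"$RPC" bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0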
00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:31.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:31.470 { 00:08:31.470 "params": { 00:08:31.470 "name": "Nvme$subsystem", 00:08:31.470 "trtype": "$TEST_TRANSPORT", 00:08:31.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.470 "adrfam": "ipv4", 00:08:31.470 "trsvcid": "$NVMF_PORT", 00:08:31.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.470 "hdgst": ${hdgst:-false}, 00:08:31.470 "ddgst": ${ddgst:-false} 00:08:31.470 }, 00:08:31.470 "method": "bdev_nvme_attach_controller" 00:08:31.470 } 00:08:31.470 EOF 00:08:31.470 )") 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:31.470 21:48:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:31.470 "params": { 00:08:31.470 "name": "Nvme0", 00:08:31.470 "trtype": "tcp", 00:08:31.470 "traddr": "10.0.0.2", 00:08:31.470 "adrfam": "ipv4", 00:08:31.470 "trsvcid": "4420", 00:08:31.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:31.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:31.470 "hdgst": false, 00:08:31.470 "ddgst": false 00:08:31.470 }, 00:08:31.470 "method": "bdev_nvme_attach_controller" 00:08:31.470 }' 00:08:31.470 [2024-07-24 21:48:37.152112] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:31.470 [2024-07-24 21:48:37.152209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77080 ] 00:08:31.729 [2024-07-24 21:48:37.293142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.729 [2024-07-24 21:48:37.386548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.987 [2024-07-24 21:48:37.451276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.987 Running I/O for 10 seconds... 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:32.557 
21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 21:48:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:32.557 [2024-07-24 21:48:38.245870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.245919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.245945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.245956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.245968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.245977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.245989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.245998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.246009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.246018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.246029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.246038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.246049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.246058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.246069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.246078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.246088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.246098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.246109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.557 [2024-07-24 21:48:38.246118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.557 [2024-07-24 21:48:38.246128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.558 [2024-07-24 21:48:38.246757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.558 [2024-07-24 21:48:38.246766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.246984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.246993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.559 [2024-07-24 21:48:38.247272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec89e0 is same with the state(5) to be set 00:08:32.559 [2024-07-24 21:48:38.247353] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xec89e0 was disconnected and freed. reset controller. 
00:08:32.559 [2024-07-24 21:48:38.247442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.559 [2024-07-24 21:48:38.247459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.559 [2024-07-24 21:48:38.247479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.559 [2024-07-24 21:48:38.247497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.559 [2024-07-24 21:48:38.247507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.559 [2024-07-24 21:48:38.247516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:32.560 [2024-07-24 21:48:38.247525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec9570 is same with the state(5) to be set 00:08:32.560 [2024-07-24 21:48:38.248617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:32.560 task offset: 0 on job bdev=Nvme0n1 fails 00:08:32.560 00:08:32.560 Latency(us) 00:08:32.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.560 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:32.560 Job: Nvme0n1 ended in about 0.68 seconds with error 00:08:32.560 Verification LBA range: start 0x0 length 0x400 00:08:32.560 Nvme0n1 : 0.68 1497.14 93.57 93.57 0.00 39245.08 2100.13 37891.72 00:08:32.560 =================================================================================================================== 00:08:32.560 Total : 1497.14 93.57 93.57 0.00 39245.08 2100.13 37891.72 00:08:32.560 [2024-07-24 21:48:38.250519] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.560 [2024-07-24 21:48:38.250543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9570 (9): Bad file descriptor 00:08:32.560 [2024-07-24 21:48:38.253454] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 77080 00:08:33.932 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77080) - No such process 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:33.932 { 00:08:33.932 "params": { 00:08:33.932 "name": "Nvme$subsystem", 00:08:33.932 "trtype": "$TEST_TRANSPORT", 00:08:33.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.932 "adrfam": "ipv4", 00:08:33.932 "trsvcid": "$NVMF_PORT", 00:08:33.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.932 "hdgst": ${hdgst:-false}, 00:08:33.932 "ddgst": ${ddgst:-false} 00:08:33.932 }, 00:08:33.932 "method": "bdev_nvme_attach_controller" 00:08:33.932 } 00:08:33.932 EOF 00:08:33.932 )") 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:33.932 21:48:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:33.932 "params": { 00:08:33.932 "name": "Nvme0", 00:08:33.932 "trtype": "tcp", 00:08:33.932 "traddr": "10.0.0.2", 00:08:33.932 "adrfam": "ipv4", 00:08:33.932 "trsvcid": "4420", 00:08:33.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:33.932 "hdgst": false, 00:08:33.932 "ddgst": false 00:08:33.932 }, 00:08:33.932 "method": "bdev_nvme_attach_controller" 00:08:33.932 }' 00:08:33.932 [2024-07-24 21:48:39.304359] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:33.932 [2024-07-24 21:48:39.304461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77118 ] 00:08:33.932 [2024-07-24 21:48:39.442652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.932 [2024-07-24 21:48:39.529726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.932 [2024-07-24 21:48:39.590366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.250 Running I/O for 1 seconds... 
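For readers reconstructing the step above: the bdevperf invocation and the attach-controller entry printed by the config generator reduce to roughly the sketch below. Only the inner "bdev_nvme_attach_controller" fragment and the command-line flags are taken from the trace; the surrounding "subsystems" envelope is the standard SPDK JSON-config wrapper (not shown in the log), and /tmp/nvme0.json is a hypothetical path standing in for the /dev/fd/62 process substitution the harness uses.

# Sketch, assuming the standard SPDK JSON-config envelope around the fragment
# printed above; address, port and NQNs are the values used by this run only.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same flags as the logged run: queue depth 64, 64 KiB I/Os, verify workload, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1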
00:08:35.192 00:08:35.192 Latency(us) 00:08:35.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.192 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:35.192 Verification LBA range: start 0x0 length 0x400 00:08:35.192 Nvme0n1 : 1.02 1564.20 97.76 0.00 0.00 40117.48 4051.32 37891.72 00:08:35.192 =================================================================================================================== 00:08:35.192 Total : 1564.20 97.76 0.00 0.00 40117.48 4051.32 37891.72 00:08:35.450 21:48:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:35.450 21:48:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:35.450 21:48:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:35.450 21:48:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:35.450 21:48:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:35.450 21:48:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.450 21:48:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.451 rmmod nvme_tcp 00:08:35.451 rmmod nvme_fabrics 00:08:35.451 rmmod nvme_keyring 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 77025 ']' 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 77025 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 77025 ']' 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 77025 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77025 00:08:35.451 killing process with pid 77025 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77025' 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 77025 00:08:35.451 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 77025 00:08:35.709 [2024-07-24 21:48:41.335101] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.709 21:48:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:35.710 21:48:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:35.710 00:08:35.710 real 0m6.066s 00:08:35.710 user 0m23.493s 00:08:35.710 sys 0m1.510s 00:08:35.710 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.710 21:48:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.710 ************************************ 00:08:35.710 END TEST nvmf_host_management 00:08:35.710 ************************************ 00:08:35.970 21:48:41 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:35.970 21:48:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:35.970 21:48:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.970 21:48:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.970 ************************************ 00:08:35.970 START TEST nvmf_lvol 00:08:35.970 ************************************ 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:35.970 * Looking for test storage... 
00:08:35.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:35.970 21:48:41 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:35.970 Cannot find device "nvmf_tgt_br" 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.970 Cannot find device "nvmf_tgt_br2" 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:35.970 Cannot find device "nvmf_tgt_br" 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:35.970 Cannot find device "nvmf_tgt_br2" 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.970 21:48:41 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:36.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:36.230 00:08:36.230 --- 10.0.0.2 ping statistics --- 00:08:36.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.230 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:36.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:36.230 00:08:36.230 --- 10.0.0.3 ping statistics --- 00:08:36.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.230 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:36.230 00:08:36.230 --- 10.0.0.1 ping statistics --- 00:08:36.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.230 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=77338 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 77338 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 77338 ']' 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:36.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:36.230 21:48:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:36.230 [2024-07-24 21:48:41.937931] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:36.230 [2024-07-24 21:48:41.938055] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.488 [2024-07-24 21:48:42.081679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.488 [2024-07-24 21:48:42.178534] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.488 [2024-07-24 21:48:42.179055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
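The ping results above come out of the veth/bridge plumbing traced in the preceding nvmf_veth_init steps. Condensed into one place, the sequence is roughly the sketch below; interface names, addresses and the port-4420 rule are the ones used by this run, and the sketch is not a complete reproduction of nvmf/common.sh.

# Condensed sketch of the topology assembled above: one initiator veth on the
# host, two target veths moved into the nvmf_tgt_ns_spdk namespace, and all of
# their peer ends enslaved to a single bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2            # host -> target reachability, as checked above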
00:08:36.488 [2024-07-24 21:48:42.179492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.488 [2024-07-24 21:48:42.179982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.488 [2024-07-24 21:48:42.180287] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.488 [2024-07-24 21:48:42.180699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.488 [2024-07-24 21:48:42.180844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.488 [2024-07-24 21:48:42.180850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.746 [2024-07-24 21:48:42.241203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.311 21:48:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:37.311 21:48:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:08:37.311 21:48:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.311 21:48:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.311 21:48:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.311 21:48:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.311 21:48:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:37.568 [2024-07-24 21:48:43.203733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.568 21:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:37.823 21:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:37.823 21:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.080 21:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:38.080 21:48:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:38.338 21:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:38.595 21:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a6a2421c-853b-44f2-8ce3-ec0004b269b5 00:08:38.595 21:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a6a2421c-853b-44f2-8ce3-ec0004b269b5 lvol 20 00:08:38.852 21:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=051cd57a-71c2-4c19-aad6-3bef4964a507 00:08:38.852 21:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:39.417 21:48:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 051cd57a-71c2-4c19-aad6-3bef4964a507 00:08:39.417 21:48:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.673 [2024-07-24 21:48:45.319365] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.673 21:48:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.930 21:48:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:39.930 21:48:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=77413 00:08:39.930 21:48:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:40.861 21:48:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 051cd57a-71c2-4c19-aad6-3bef4964a507 MY_SNAPSHOT 00:08:41.426 21:48:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a0f8b3e1-9ef8-481d-90a9-42b6e9735d86 00:08:41.426 21:48:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 051cd57a-71c2-4c19-aad6-3bef4964a507 30 00:08:41.683 21:48:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a0f8b3e1-9ef8-481d-90a9-42b6e9735d86 MY_CLONE 00:08:41.940 21:48:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e3e39290-feba-4b9d-a59f-2d0034d1cd71 00:08:41.940 21:48:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e3e39290-feba-4b9d-a59f-2d0034d1cd71 00:08:42.530 21:48:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 77413 00:08:50.638 Initializing NVMe Controllers 00:08:50.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:50.638 Controller IO queue size 128, less than required. 00:08:50.638 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:50.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:50.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:50.638 Initialization complete. Launching workers. 
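Stripped of the shell-trace noise, the lvol test traced above reduces to the RPC sequence sketched below. Every call and flag is taken from the log; the $rpc shorthand and the lvs/lvol/snap/clone variables are illustrative stand-ins for the UUIDs each call prints back (a6a2421c-…, 051cd57a-…, a0f8b3e1-…, e3e39290-… in this run).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Transport plus backing store: two 64 MB malloc bdevs striped into raid0,
# an lvstore on the raid, and a 20 MB lvol inside it.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                       # -> Malloc0
$rpc bdev_malloc_create 64 512                       # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # prints the lvol UUID

# Export the lvol over NVMe/TCP on this run's target address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Drive random writes against the exported namespace while reshaping the
# lvol underneath it, exactly as the trace shows.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!                                          # 77413 in this run
sleep 1
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"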
00:08:50.638 ======================================================== 00:08:50.638 Latency(us) 00:08:50.638 Device Information : IOPS MiB/s Average min max 00:08:50.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10685.40 41.74 11980.09 2360.56 52927.54 00:08:50.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10575.00 41.31 12103.33 3010.95 56650.43 00:08:50.638 ======================================================== 00:08:50.638 Total : 21260.40 83.05 12041.39 2360.56 56650.43 00:08:50.638 00:08:50.638 21:48:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.638 21:48:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 051cd57a-71c2-4c19-aad6-3bef4964a507 00:08:50.896 21:48:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6a2421c-853b-44f2-8ce3-ec0004b269b5 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.154 rmmod nvme_tcp 00:08:51.154 rmmod nvme_fabrics 00:08:51.154 rmmod nvme_keyring 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 77338 ']' 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 77338 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 77338 ']' 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 77338 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77338 00:08:51.154 killing process with pid 77338 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77338' 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 77338 00:08:51.154 21:48:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 77338 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
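Cleanup, traced around this point, mirrors the setup. Continuing the same sketch (the $rpc, $lvol and $lvs stand-ins from above; the target pid is 77338 in this run):

# Drop the export first, then the lvol and its store.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"

# Host-side teardown as logged: flush, unload the NVMe/TCP modules, stop the
# target process, and flush the initiator interface (nvmf_tcp_fini in the trace).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 77338 && wait 77338
ip -4 addr flush nvmf_init_if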
00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:51.412 ************************************ 00:08:51.412 END TEST nvmf_lvol 00:08:51.412 ************************************ 00:08:51.412 00:08:51.412 real 0m15.658s 00:08:51.412 user 1m5.419s 00:08:51.412 sys 0m4.171s 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:51.412 21:48:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:51.671 21:48:57 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:51.671 21:48:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:51.671 21:48:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:51.671 21:48:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.671 ************************************ 00:08:51.671 START TEST nvmf_lvs_grow 00:08:51.671 ************************************ 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:51.671 * Looking for test storage... 
00:08:51.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:51.671 Cannot find device "nvmf_tgt_br" 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.671 Cannot find device "nvmf_tgt_br2" 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:51.671 Cannot find device "nvmf_tgt_br" 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:51.671 Cannot find device "nvmf_tgt_br2" 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:51.671 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.930 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:51.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:08:51.930 00:08:51.930 --- 10.0.0.2 ping statistics --- 00:08:51.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.930 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:51.930 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.930 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:51.930 00:08:51.930 --- 10.0.0.3 ping statistics --- 00:08:51.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.930 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:51.930 00:08:51.930 --- 10.0.0.1 ping statistics --- 00:08:51.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.930 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=77733 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:51.930 21:48:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 77733 00:08:51.931 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 77733 ']' 00:08:51.931 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.931 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:51.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.931 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
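What nvmf_veth_init has just built is the two-sided topology the rest of this run talks over: the initiator stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (plus 10.0.0.3 on a second interface), and both sides hang off the nvmf_br bridge via veth pairs. Condensed to its essential commands (names and addresses copied from the trace above; the canonical logic lives in nvmf/common.sh, so this is only a sketch and assumes root privileges):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                                    # reachability check

A second pair (nvmf_tgt_if2/nvmf_tgt_br2, carrying 10.0.0.3) is created the same way, and every link is brought up as the trace shows. The three successful pings (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are what lets the helper return 0 and prepend "ip netns exec nvmf_tgt_ns_spdk" to NVMF_APP, so the target launched next runs inside the namespace.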
00:08:51.931 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:51.931 21:48:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.189 [2024-07-24 21:48:57.648340] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:52.189 [2024-07-24 21:48:57.648420] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.189 [2024-07-24 21:48:57.785386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.189 [2024-07-24 21:48:57.884501] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.189 [2024-07-24 21:48:57.884564] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.189 [2024-07-24 21:48:57.884580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.189 [2024-07-24 21:48:57.884591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.189 [2024-07-24 21:48:57.884599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.189 [2024-07-24 21:48:57.884655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.447 [2024-07-24 21:48:57.943421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:53.065 21:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:53.065 21:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:08:53.065 21:48:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.065 21:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.065 21:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.065 21:48:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.065 21:48:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:53.337 [2024-07-24 21:48:58.971992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.337 21:48:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:53.337 21:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:53.337 21:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:53.337 21:48:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.337 ************************************ 00:08:53.337 START TEST lvs_grow_clean 00:08:53.337 ************************************ 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:53.338 21:48:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.338 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:53.596 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:53.596 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:54.162 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5c62f284-9d8a-44ef-9864-fbc170fd8995 00:08:54.162 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:08:54.162 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:54.162 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:54.162 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:54.162 21:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 lvol 150 00:08:54.420 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2a1180e5-de21-4139-9e95-63d72eac6a63 00:08:54.420 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:54.420 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:54.679 [2024-07-24 21:49:00.284677] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:54.679 [2024-07-24 21:49:00.284777] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:54.679 true 00:08:54.679 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:08:54.679 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:54.937 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:54.937 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.195 21:49:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a1180e5-de21-4139-9e95-63d72eac6a63 00:08:55.453 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:55.712 [2024-07-24 21:49:01.297213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.712 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77821 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77821 /var/tmp/bdevperf.sock 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 77821 ']' 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:55.970 21:49:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:55.970 [2024-07-24 21:49:01.593272] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
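At this point the clean-path fixture is fully provisioned: a 200 MB file-backed AIO bdev carrying a logical volume store with 49 data clusters, a 150 MB volume on it (rounded up to 38 clusters of 4 MiB, which is where the 38912 four-KiB blocks reported later come from), the backing file already grown to 400 MB (bdev_aio_rescan logs 51200 -> 102400 blocks) while the store still reports 49 clusters, and the volume exported over NVMe/TCP. Stripped of xtrace noise, the provisioning amounts to the following, where rpc.py and $testdir abbreviate the full /home/vagrant/spdk_repo paths shown above (a sketch, not the test script itself):

  truncate -s 200M "$testdir/aio_bdev"
  rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 lvol 150
  truncate -s 400M "$testdir/aio_bdev"
  rpc.py bdev_aio_rescan aio_bdev                    # lvstore still reports 49 data clusters
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a1180e5-de21-4139-9e95-63d72eac6a63
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The rescan-before-grow ordering is deliberate: the extra 200 MB stays invisible to the lvstore until bdev_lvol_grow_lvstore is issued while bdevperf is writing, which is exactly what nvmf_lvs_grow.sh@60 does further down.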
00:08:55.970 [2024-07-24 21:49:01.593357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77821 ] 00:08:56.228 [2024-07-24 21:49:01.729871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.228 [2024-07-24 21:49:01.815983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.228 [2024-07-24 21:49:01.875264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.163 21:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:57.163 21:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:08:57.163 21:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:57.163 Nvme0n1 00:08:57.163 21:49:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:57.441 [ 00:08:57.441 { 00:08:57.441 "name": "Nvme0n1", 00:08:57.441 "aliases": [ 00:08:57.441 "2a1180e5-de21-4139-9e95-63d72eac6a63" 00:08:57.441 ], 00:08:57.441 "product_name": "NVMe disk", 00:08:57.441 "block_size": 4096, 00:08:57.441 "num_blocks": 38912, 00:08:57.441 "uuid": "2a1180e5-de21-4139-9e95-63d72eac6a63", 00:08:57.441 "assigned_rate_limits": { 00:08:57.441 "rw_ios_per_sec": 0, 00:08:57.441 "rw_mbytes_per_sec": 0, 00:08:57.441 "r_mbytes_per_sec": 0, 00:08:57.441 "w_mbytes_per_sec": 0 00:08:57.441 }, 00:08:57.441 "claimed": false, 00:08:57.441 "zoned": false, 00:08:57.441 "supported_io_types": { 00:08:57.441 "read": true, 00:08:57.441 "write": true, 00:08:57.441 "unmap": true, 00:08:57.441 "write_zeroes": true, 00:08:57.441 "flush": true, 00:08:57.441 "reset": true, 00:08:57.441 "compare": true, 00:08:57.441 "compare_and_write": true, 00:08:57.441 "abort": true, 00:08:57.441 "nvme_admin": true, 00:08:57.441 "nvme_io": true 00:08:57.441 }, 00:08:57.441 "memory_domains": [ 00:08:57.441 { 00:08:57.441 "dma_device_id": "system", 00:08:57.441 "dma_device_type": 1 00:08:57.441 } 00:08:57.441 ], 00:08:57.441 "driver_specific": { 00:08:57.441 "nvme": [ 00:08:57.441 { 00:08:57.441 "trid": { 00:08:57.441 "trtype": "TCP", 00:08:57.441 "adrfam": "IPv4", 00:08:57.441 "traddr": "10.0.0.2", 00:08:57.441 "trsvcid": "4420", 00:08:57.441 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:57.441 }, 00:08:57.441 "ctrlr_data": { 00:08:57.441 "cntlid": 1, 00:08:57.441 "vendor_id": "0x8086", 00:08:57.441 "model_number": "SPDK bdev Controller", 00:08:57.441 "serial_number": "SPDK0", 00:08:57.441 "firmware_revision": "24.05.1", 00:08:57.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:57.441 "oacs": { 00:08:57.441 "security": 0, 00:08:57.441 "format": 0, 00:08:57.441 "firmware": 0, 00:08:57.441 "ns_manage": 0 00:08:57.441 }, 00:08:57.441 "multi_ctrlr": true, 00:08:57.441 "ana_reporting": false 00:08:57.441 }, 00:08:57.441 "vs": { 00:08:57.441 "nvme_version": "1.3" 00:08:57.441 }, 00:08:57.442 "ns_data": { 00:08:57.442 "id": 1, 00:08:57.442 "can_share": true 00:08:57.442 } 00:08:57.442 } 00:08:57.442 ], 00:08:57.442 "mp_policy": "active_passive" 00:08:57.442 } 00:08:57.442 } 00:08:57.442 ] 
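The JSON dump above is the initiator-side view: bdevperf, still in the root namespace, has attached to the subsystem over TCP and sees the exported logical volume as an "NVMe disk" named Nvme0n1 (4096-byte blocks, 38912 blocks, serial SPDK0, subsystem nqn.2016-06.io.spdk:cnode0). Reduced from the trace, the attach-and-verify sequence is roughly the following (binary and socket paths abbreviated; a sketch that assumes bdevperf was started with -z so it idles until driven over RPC):

  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000    # wait up to 3000 ms for the bdev

Only once Nvme0n1 shows up does the test hand control to bdevperf.py perform_tests, which produces the per-second randwrite table that follows.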
00:08:57.442 21:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77844 00:08:57.442 21:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.442 21:49:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:57.699 Running I/O for 10 seconds... 00:08:58.633 Latency(us) 00:08:58.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.633 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:58.633 =================================================================================================================== 00:08:58.633 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:08:58.633 00:08:59.566 21:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:08:59.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.566 Nvme0n1 : 2.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:59.566 =================================================================================================================== 00:08:59.566 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:08:59.566 00:08:59.823 true 00:08:59.823 21:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:08:59.823 21:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:00.082 21:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:00.082 21:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:00.082 21:49:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 77844 00:09:00.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.647 Nvme0n1 : 3.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:09:00.647 =================================================================================================================== 00:09:00.647 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:09:00.647 00:09:01.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.587 Nvme0n1 : 4.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:09:01.587 =================================================================================================================== 00:09:01.587 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:09:01.587 00:09:02.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.971 Nvme0n1 : 5.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:09:02.971 =================================================================================================================== 00:09:02.971 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:09:02.971 00:09:03.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.536 Nvme0n1 : 6.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:09:03.536 =================================================================================================================== 00:09:03.536 
Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:09:03.536 00:09:04.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.909 Nvme0n1 : 7.00 7565.57 29.55 0.00 0.00 0.00 0.00 0.00 00:09:04.909 =================================================================================================================== 00:09:04.909 Total : 7565.57 29.55 0.00 0.00 0.00 0.00 0.00 00:09:04.909 00:09:05.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.843 Nvme0n1 : 8.00 7540.62 29.46 0.00 0.00 0.00 0.00 0.00 00:09:05.843 =================================================================================================================== 00:09:05.843 Total : 7540.62 29.46 0.00 0.00 0.00 0.00 0.00 00:09:05.843 00:09:06.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.777 Nvme0n1 : 9.00 7521.22 29.38 0.00 0.00 0.00 0.00 0.00 00:09:06.777 =================================================================================================================== 00:09:06.777 Total : 7521.22 29.38 0.00 0.00 0.00 0.00 0.00 00:09:06.777 00:09:07.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.734 Nvme0n1 : 10.00 7505.70 29.32 0.00 0.00 0.00 0.00 0.00 00:09:07.734 =================================================================================================================== 00:09:07.735 Total : 7505.70 29.32 0.00 0.00 0.00 0.00 0.00 00:09:07.735 00:09:07.735 00:09:07.735 Latency(us) 00:09:07.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.735 Nvme0n1 : 10.01 7510.70 29.34 0.00 0.00 17035.84 15013.70 35985.22 00:09:07.735 =================================================================================================================== 00:09:07.735 Total : 7510.70 29.34 0.00 0.00 17035.84 15013.70 35985.22 00:09:07.735 0 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77821 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 77821 ']' 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 77821 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77821 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:07.735 killing process with pid 77821 00:09:07.735 Received shutdown signal, test time was about 10.000000 seconds 00:09:07.735 00:09:07.735 Latency(us) 00:09:07.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.735 =================================================================================================================== 00:09:07.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77821' 00:09:07.735 21:49:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 77821 00:09:07.735 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 77821 00:09:07.993 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.250 21:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.508 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:09:08.508 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:08.766 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:08.766 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:08.766 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.024 [2024-07-24 21:49:14.556193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:09.024 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:09:09.282 request: 00:09:09.282 { 00:09:09.282 "uuid": "5c62f284-9d8a-44ef-9864-fbc170fd8995", 00:09:09.282 "method": "bdev_lvol_get_lvstores", 00:09:09.282 "req_id": 1 00:09:09.282 } 00:09:09.282 Got JSON-RPC error response 
00:09:09.282 response: 00:09:09.282 { 00:09:09.282 "code": -19, 00:09:09.282 "message": "No such device" 00:09:09.282 } 00:09:09.282 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:09.282 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:09.282 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:09.282 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:09.282 21:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.540 aio_bdev 00:09:09.540 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2a1180e5-de21-4139-9e95-63d72eac6a63 00:09:09.540 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=2a1180e5-de21-4139-9e95-63d72eac6a63 00:09:09.540 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:09.540 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:09:09.540 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:09.540 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:09.540 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:09.799 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a1180e5-de21-4139-9e95-63d72eac6a63 -t 2000 00:09:10.057 [ 00:09:10.057 { 00:09:10.057 "name": "2a1180e5-de21-4139-9e95-63d72eac6a63", 00:09:10.057 "aliases": [ 00:09:10.057 "lvs/lvol" 00:09:10.057 ], 00:09:10.057 "product_name": "Logical Volume", 00:09:10.057 "block_size": 4096, 00:09:10.057 "num_blocks": 38912, 00:09:10.057 "uuid": "2a1180e5-de21-4139-9e95-63d72eac6a63", 00:09:10.057 "assigned_rate_limits": { 00:09:10.057 "rw_ios_per_sec": 0, 00:09:10.057 "rw_mbytes_per_sec": 0, 00:09:10.057 "r_mbytes_per_sec": 0, 00:09:10.057 "w_mbytes_per_sec": 0 00:09:10.057 }, 00:09:10.057 "claimed": false, 00:09:10.057 "zoned": false, 00:09:10.057 "supported_io_types": { 00:09:10.057 "read": true, 00:09:10.057 "write": true, 00:09:10.057 "unmap": true, 00:09:10.057 "write_zeroes": true, 00:09:10.057 "flush": false, 00:09:10.057 "reset": true, 00:09:10.057 "compare": false, 00:09:10.057 "compare_and_write": false, 00:09:10.057 "abort": false, 00:09:10.057 "nvme_admin": false, 00:09:10.057 "nvme_io": false 00:09:10.057 }, 00:09:10.057 "driver_specific": { 00:09:10.057 "lvol": { 00:09:10.057 "lvol_store_uuid": "5c62f284-9d8a-44ef-9864-fbc170fd8995", 00:09:10.057 "base_bdev": "aio_bdev", 00:09:10.057 "thin_provision": false, 00:09:10.057 "num_allocated_clusters": 38, 00:09:10.057 "snapshot": false, 00:09:10.057 "clone": false, 00:09:10.057 "esnap_clone": false 00:09:10.057 } 00:09:10.057 } 00:09:10.057 } 00:09:10.057 ] 00:09:10.057 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:09:10.057 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:09:10.057 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:10.315 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:10.315 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:09:10.315 21:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:10.574 21:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:10.574 21:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2a1180e5-de21-4139-9e95-63d72eac6a63 00:09:10.832 21:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c62f284-9d8a-44ef-9864-fbc170fd8995 00:09:11.090 21:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.348 21:49:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.607 ************************************ 00:09:11.607 END TEST lvs_grow_clean 00:09:11.607 ************************************ 00:09:11.607 00:09:11.607 real 0m18.224s 00:09:11.607 user 0m17.166s 00:09:11.607 sys 0m2.542s 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:11.607 ************************************ 00:09:11.607 START TEST lvs_grow_dirty 00:09:11.607 ************************************ 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:11.607 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.173 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:12.173 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:12.173 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b95241c5-e74e-4519-972a-dcd40661bd51 00:09:12.173 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:12.173 21:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:12.431 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:12.431 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:12.431 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b95241c5-e74e-4519-972a-dcd40661bd51 lvol 150 00:09:12.688 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3c8a55af-ae5a-4375-b082-18bd26832c31 00:09:12.688 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:12.688 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:12.946 [2024-07-24 21:49:18.633622] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:12.946 [2024-07-24 21:49:18.633699] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:12.946 true 00:09:12.946 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:12.946 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:13.512 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:13.512 21:49:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:13.512 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3c8a55af-ae5a-4375-b082-18bd26832c31 00:09:13.771 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:14.030 [2024-07-24 21:49:19.646151] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.030 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78091 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78091 /var/tmp/bdevperf.sock 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 78091 ']' 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:14.288 21:49:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.288 [2024-07-24 21:49:19.944923] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
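The dirty variant repeats the same provisioning against a fresh store (b95241c5-e74e-4519-972a-dcd40661bd51) and volume (3c8a55af-ae5a-4375-b082-18bd26832c31) and drives the same 10-second randwrite-plus-grow workload through a second bdevperf instance (pid 78091). Where it diverges is teardown: the clean run kept the target alive and only detached and re-attached the AIO bdev, while the dirty run kills the whole target with the store still loaded and then starts a new one, so the store is never cleanly unloaded. Roughly (the mode variable name is assumed here; the kill itself appears at nvmf_lvs_grow.sh@74 below):

  if [[ $mode == dirty ]]; then
      kill -9 "$nvmfpid"      # leave the lvstore dirty on the AIO file
  fi

That is what sets up the blobstore-recovery check that closes this section.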
00:09:14.288 [2024-07-24 21:49:19.945205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78091 ] 00:09:14.546 [2024-07-24 21:49:20.079300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.546 [2024-07-24 21:49:20.169719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.546 [2024-07-24 21:49:20.224519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.481 21:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:15.481 21:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:09:15.482 21:49:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.739 Nvme0n1 00:09:15.739 21:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.997 [ 00:09:15.997 { 00:09:15.997 "name": "Nvme0n1", 00:09:15.997 "aliases": [ 00:09:15.997 "3c8a55af-ae5a-4375-b082-18bd26832c31" 00:09:15.997 ], 00:09:15.997 "product_name": "NVMe disk", 00:09:15.997 "block_size": 4096, 00:09:15.997 "num_blocks": 38912, 00:09:15.997 "uuid": "3c8a55af-ae5a-4375-b082-18bd26832c31", 00:09:15.997 "assigned_rate_limits": { 00:09:15.997 "rw_ios_per_sec": 0, 00:09:15.997 "rw_mbytes_per_sec": 0, 00:09:15.997 "r_mbytes_per_sec": 0, 00:09:15.997 "w_mbytes_per_sec": 0 00:09:15.997 }, 00:09:15.997 "claimed": false, 00:09:15.997 "zoned": false, 00:09:15.997 "supported_io_types": { 00:09:15.997 "read": true, 00:09:15.997 "write": true, 00:09:15.997 "unmap": true, 00:09:15.997 "write_zeroes": true, 00:09:15.997 "flush": true, 00:09:15.997 "reset": true, 00:09:15.997 "compare": true, 00:09:15.997 "compare_and_write": true, 00:09:15.997 "abort": true, 00:09:15.998 "nvme_admin": true, 00:09:15.998 "nvme_io": true 00:09:15.998 }, 00:09:15.998 "memory_domains": [ 00:09:15.998 { 00:09:15.998 "dma_device_id": "system", 00:09:15.998 "dma_device_type": 1 00:09:15.998 } 00:09:15.998 ], 00:09:15.998 "driver_specific": { 00:09:15.998 "nvme": [ 00:09:15.998 { 00:09:15.998 "trid": { 00:09:15.998 "trtype": "TCP", 00:09:15.998 "adrfam": "IPv4", 00:09:15.998 "traddr": "10.0.0.2", 00:09:15.998 "trsvcid": "4420", 00:09:15.998 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.998 }, 00:09:15.998 "ctrlr_data": { 00:09:15.998 "cntlid": 1, 00:09:15.998 "vendor_id": "0x8086", 00:09:15.998 "model_number": "SPDK bdev Controller", 00:09:15.998 "serial_number": "SPDK0", 00:09:15.998 "firmware_revision": "24.05.1", 00:09:15.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.998 "oacs": { 00:09:15.998 "security": 0, 00:09:15.998 "format": 0, 00:09:15.998 "firmware": 0, 00:09:15.998 "ns_manage": 0 00:09:15.998 }, 00:09:15.998 "multi_ctrlr": true, 00:09:15.998 "ana_reporting": false 00:09:15.998 }, 00:09:15.998 "vs": { 00:09:15.998 "nvme_version": "1.3" 00:09:15.998 }, 00:09:15.998 "ns_data": { 00:09:15.998 "id": 1, 00:09:15.998 "can_share": true 00:09:15.998 } 00:09:15.998 } 00:09:15.998 ], 00:09:15.998 "mp_policy": "active_passive" 00:09:15.998 } 00:09:15.998 } 00:09:15.998 ] 
00:09:15.998 21:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78114 00:09:15.998 21:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.998 21:49:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.998 Running I/O for 10 seconds... 00:09:16.931 Latency(us) 00:09:16.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.931 Nvme0n1 : 1.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:09:16.931 =================================================================================================================== 00:09:16.931 Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:09:16.931 00:09:17.865 21:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:18.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.122 Nvme0n1 : 2.00 7810.50 30.51 0.00 0.00 0.00 0.00 0.00 00:09:18.122 =================================================================================================================== 00:09:18.122 Total : 7810.50 30.51 0.00 0.00 0.00 0.00 0.00 00:09:18.122 00:09:18.122 true 00:09:18.122 21:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:18.122 21:49:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:18.689 21:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:18.689 21:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:18.689 21:49:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 78114 00:09:18.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.960 Nvme0n1 : 3.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:09:18.960 =================================================================================================================== 00:09:18.960 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:09:18.960 00:09:19.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.926 Nvme0n1 : 4.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:09:19.926 =================================================================================================================== 00:09:19.926 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:09:19.926 00:09:21.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.300 Nvme0n1 : 5.00 7594.60 29.67 0.00 0.00 0.00 0.00 0.00 00:09:21.300 =================================================================================================================== 00:09:21.300 Total : 7594.60 29.67 0.00 0.00 0.00 0.00 0.00 00:09:21.300 00:09:22.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.234 Nvme0n1 : 6.00 7598.83 29.68 0.00 0.00 0.00 0.00 0.00 00:09:22.234 =================================================================================================================== 00:09:22.234 
Total : 7598.83 29.68 0.00 0.00 0.00 0.00 0.00 00:09:22.234 00:09:23.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.169 Nvme0n1 : 7.00 7543.71 29.47 0.00 0.00 0.00 0.00 0.00 00:09:23.169 =================================================================================================================== 00:09:23.169 Total : 7543.71 29.47 0.00 0.00 0.00 0.00 0.00 00:09:23.169 00:09:24.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.104 Nvme0n1 : 8.00 7505.62 29.32 0.00 0.00 0.00 0.00 0.00 00:09:24.104 =================================================================================================================== 00:09:24.104 Total : 7505.62 29.32 0.00 0.00 0.00 0.00 0.00 00:09:24.104 00:09:25.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.037 Nvme0n1 : 9.00 7476.00 29.20 0.00 0.00 0.00 0.00 0.00 00:09:25.038 =================================================================================================================== 00:09:25.038 Total : 7476.00 29.20 0.00 0.00 0.00 0.00 0.00 00:09:25.038 00:09:25.974 00:09:25.974 Latency(us) 00:09:25.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.974 Nvme0n1 : 10.00 7463.01 29.15 0.00 0.00 17146.39 12451.84 74353.57 00:09:25.974 =================================================================================================================== 00:09:25.974 Total : 7463.01 29.15 0.00 0.00 17146.39 12451.84 74353.57 00:09:25.974 0 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78091 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 78091 ']' 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 78091 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78091 00:09:25.974 killing process with pid 78091 00:09:25.974 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.974 00:09:25.974 Latency(us) 00:09:25.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.974 =================================================================================================================== 00:09:25.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78091' 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 78091 00:09:25.974 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 78091 00:09:26.233 21:49:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 
10.0.0.2 -s 4420 00:09:26.491 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.749 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:26.749 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:27.007 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:27.007 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:27.007 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 77733 00:09:27.007 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 77733 00:09:27.265 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 77733 Killed "${NVMF_APP[@]}" "$@" 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=78253 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 78253 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 78253 ']' 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:27.265 21:49:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.265 [2024-07-24 21:49:32.781764] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:27.265 [2024-07-24 21:49:32.781916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.265 [2024-07-24 21:49:32.919237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.524 [2024-07-24 21:49:33.010909] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:27.524 [2024-07-24 21:49:33.011202] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.524 [2024-07-24 21:49:33.011223] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.524 [2024-07-24 21:49:33.011232] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.524 [2024-07-24 21:49:33.011239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.524 [2024-07-24 21:49:33.011267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.524 [2024-07-24 21:49:33.066090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.089 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:28.089 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:09:28.089 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.089 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.089 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.089 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.089 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.347 [2024-07-24 21:49:33.939089] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:28.347 [2024-07-24 21:49:33.939731] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:28.347 [2024-07-24 21:49:33.939934] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3c8a55af-ae5a-4375-b082-18bd26832c31 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=3c8a55af-ae5a-4375-b082-18bd26832c31 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:28.347 21:49:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:28.608 21:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c8a55af-ae5a-4375-b082-18bd26832c31 -t 2000 00:09:28.868 [ 00:09:28.868 { 00:09:28.868 "name": "3c8a55af-ae5a-4375-b082-18bd26832c31", 00:09:28.868 "aliases": [ 00:09:28.868 "lvs/lvol" 00:09:28.868 ], 00:09:28.868 "product_name": "Logical Volume", 00:09:28.868 "block_size": 4096, 00:09:28.868 "num_blocks": 
38912, 00:09:28.868 "uuid": "3c8a55af-ae5a-4375-b082-18bd26832c31", 00:09:28.868 "assigned_rate_limits": { 00:09:28.868 "rw_ios_per_sec": 0, 00:09:28.868 "rw_mbytes_per_sec": 0, 00:09:28.868 "r_mbytes_per_sec": 0, 00:09:28.868 "w_mbytes_per_sec": 0 00:09:28.868 }, 00:09:28.868 "claimed": false, 00:09:28.868 "zoned": false, 00:09:28.868 "supported_io_types": { 00:09:28.868 "read": true, 00:09:28.868 "write": true, 00:09:28.868 "unmap": true, 00:09:28.868 "write_zeroes": true, 00:09:28.868 "flush": false, 00:09:28.868 "reset": true, 00:09:28.868 "compare": false, 00:09:28.868 "compare_and_write": false, 00:09:28.868 "abort": false, 00:09:28.868 "nvme_admin": false, 00:09:28.868 "nvme_io": false 00:09:28.868 }, 00:09:28.868 "driver_specific": { 00:09:28.868 "lvol": { 00:09:28.868 "lvol_store_uuid": "b95241c5-e74e-4519-972a-dcd40661bd51", 00:09:28.868 "base_bdev": "aio_bdev", 00:09:28.868 "thin_provision": false, 00:09:28.868 "num_allocated_clusters": 38, 00:09:28.868 "snapshot": false, 00:09:28.868 "clone": false, 00:09:28.868 "esnap_clone": false 00:09:28.868 } 00:09:28.868 } 00:09:28.868 } 00:09:28.868 ] 00:09:28.868 21:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:09:28.868 21:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:28.868 21:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:29.126 21:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:29.126 21:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:29.126 21:49:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:29.384 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:29.384 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.642 [2024-07-24 21:49:35.276730] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.642 21:49:35 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:29.642 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:29.900 request: 00:09:29.900 { 00:09:29.900 "uuid": "b95241c5-e74e-4519-972a-dcd40661bd51", 00:09:29.900 "method": "bdev_lvol_get_lvstores", 00:09:29.900 "req_id": 1 00:09:29.900 } 00:09:29.900 Got JSON-RPC error response 00:09:29.900 response: 00:09:29.900 { 00:09:29.901 "code": -19, 00:09:29.901 "message": "No such device" 00:09:29.901 } 00:09:29.901 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:29.901 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:29.901 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:29.901 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:29.901 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.159 aio_bdev 00:09:30.159 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3c8a55af-ae5a-4375-b082-18bd26832c31 00:09:30.159 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=3c8a55af-ae5a-4375-b082-18bd26832c31 00:09:30.159 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:09:30.159 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:09:30.159 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:09:30.159 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:09:30.159 21:49:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:30.433 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c8a55af-ae5a-4375-b082-18bd26832c31 -t 2000 00:09:30.691 [ 00:09:30.691 { 00:09:30.691 "name": "3c8a55af-ae5a-4375-b082-18bd26832c31", 00:09:30.691 "aliases": [ 00:09:30.691 "lvs/lvol" 00:09:30.691 ], 00:09:30.691 "product_name": "Logical Volume", 00:09:30.691 "block_size": 4096, 00:09:30.691 "num_blocks": 38912, 00:09:30.691 "uuid": "3c8a55af-ae5a-4375-b082-18bd26832c31", 00:09:30.691 "assigned_rate_limits": { 00:09:30.691 "rw_ios_per_sec": 0, 00:09:30.691 "rw_mbytes_per_sec": 0, 00:09:30.691 "r_mbytes_per_sec": 0, 00:09:30.691 "w_mbytes_per_sec": 0 00:09:30.691 }, 00:09:30.691 "claimed": false, 00:09:30.691 "zoned": false, 00:09:30.691 "supported_io_types": { 00:09:30.691 "read": 
true, 00:09:30.691 "write": true, 00:09:30.691 "unmap": true, 00:09:30.691 "write_zeroes": true, 00:09:30.691 "flush": false, 00:09:30.691 "reset": true, 00:09:30.691 "compare": false, 00:09:30.691 "compare_and_write": false, 00:09:30.691 "abort": false, 00:09:30.691 "nvme_admin": false, 00:09:30.691 "nvme_io": false 00:09:30.691 }, 00:09:30.691 "driver_specific": { 00:09:30.691 "lvol": { 00:09:30.691 "lvol_store_uuid": "b95241c5-e74e-4519-972a-dcd40661bd51", 00:09:30.691 "base_bdev": "aio_bdev", 00:09:30.691 "thin_provision": false, 00:09:30.691 "num_allocated_clusters": 38, 00:09:30.691 "snapshot": false, 00:09:30.691 "clone": false, 00:09:30.691 "esnap_clone": false 00:09:30.691 } 00:09:30.691 } 00:09:30.691 } 00:09:30.691 ] 00:09:30.691 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:09:30.691 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:30.691 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:30.950 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:30.950 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:30.950 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:31.209 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:31.209 21:49:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3c8a55af-ae5a-4375-b082-18bd26832c31 00:09:31.467 21:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b95241c5-e74e-4519-972a-dcd40661bd51 00:09:31.726 21:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:31.985 21:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:32.244 00:09:32.244 real 0m20.644s 00:09:32.244 user 0m43.687s 00:09:32.244 sys 0m8.054s 00:09:32.244 21:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:32.244 ************************************ 00:09:32.244 END TEST lvs_grow_dirty 00:09:32.244 ************************************ 00:09:32.244 21:49:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:09:32.503 21:49:37 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:09:32.503 21:49:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:32.503 nvmf_trace.0 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.503 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.503 rmmod nvme_tcp 00:09:32.503 rmmod nvme_fabrics 00:09:32.503 rmmod nvme_keyring 00:09:32.761 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 78253 ']' 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 78253 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 78253 ']' 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 78253 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78253 00:09:32.762 killing process with pid 78253 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78253' 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 78253 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 78253 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.762 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:33.020 00:09:33.020 real 0m41.364s 00:09:33.020 user 1m7.231s 00:09:33.020 sys 0m11.268s 00:09:33.020 ************************************ 00:09:33.020 END TEST nvmf_lvs_grow 00:09:33.020 ************************************ 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.020 21:49:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:33.020 21:49:38 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:33.020 21:49:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:33.020 21:49:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.020 21:49:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.020 ************************************ 00:09:33.020 START TEST nvmf_bdev_io_wait 00:09:33.020 ************************************ 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:33.020 * Looking for test storage... 00:09:33.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.020 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.020 21:49:38 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:33.021 Cannot find device "nvmf_tgt_br" 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.021 Cannot find device "nvmf_tgt_br2" 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@156 -- # true 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:33.021 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:33.278 Cannot find device "nvmf_tgt_br" 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:33.278 Cannot find device "nvmf_tgt_br2" 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:33.278 21:49:38 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:33.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:09:33.278 00:09:33.278 --- 10.0.0.2 ping statistics --- 00:09:33.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.278 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:33.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:09:33.278 00:09:33.278 --- 10.0.0.3 ping statistics --- 00:09:33.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.278 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:33.278 00:09:33.278 --- 10.0.0.1 ping statistics --- 00:09:33.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.278 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.278 21:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
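The ip/iptables sequence traced above is the whole of the veth setup this suite relies on: a target network namespace (nvmf_tgt_ns_spdk) holding the NVMe-oF listener addresses, connected back to the host over veth pairs whose host-side peers hang off the nvmf_br bridge, so that 10.0.0.1 (initiator) can reach 10.0.0.2 and 10.0.0.3 (target). A minimal standalone sketch of the same topology, using the interface names and addresses from the trace (run as root; this is an illustration of the layout, not the test's exact helper, and it leaves out the second target interface):

  # veth/bridge layout used by the nvmf target tests (sketch)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # same reachability check the trace performs

Teardown is the mirror image, and is what the ip link delete and ip netns calls near the top of this trace take care of before rebuilding the topology.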
00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=78565 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 78565 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 78565 ']' 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:33.536 21:49:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.536 [2024-07-24 21:49:39.070138] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:33.536 [2024-07-24 21:49:39.070462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.536 [2024-07-24 21:49:39.214193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.794 [2024-07-24 21:49:39.323396] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.795 [2024-07-24 21:49:39.323792] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.795 [2024-07-24 21:49:39.323956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.795 [2024-07-24 21:49:39.324145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.795 [2024-07-24 21:49:39.324188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
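The app_setup_trace notices repeat for every SPDK application the suite starts, and they are the hook for post-mortem debugging: each app keeps a trace ring in /dev/shm named after its shm id (nvmf_trace.0 here, from -i 0), and the tar step later in this log simply archives that file at exit. Grabbing the same data by hand would look roughly like this (a sketch; it assumes the spdk_trace tool was built in the same tree, as the notice implies):

  # live snapshot of the nvmf trace group for shm instance 0, as the notice suggests
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory file for offline analysis, which is what process_shm does at test exit
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0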
00:09:33.795 [2024-07-24 21:49:39.324375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.795 [2024-07-24 21:49:39.324520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.795 [2024-07-24 21:49:39.325106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.795 [2024-07-24 21:49:39.325128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.361 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:34.361 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:09:34.361 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.361 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.361 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.619 [2024-07-24 21:49:40.182012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.619 [2024-07-24 21:49:40.198328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.619 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.619 Malloc0 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.620 [2024-07-24 21:49:40.260576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78605 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=78607 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.620 { 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme$subsystem", 00:09:34.620 "trtype": "$TEST_TRANSPORT", 00:09:34.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "$NVMF_PORT", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.620 "hdgst": ${hdgst:-false}, 00:09:34.620 "ddgst": ${ddgst:-false} 00:09:34.620 }, 00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 } 00:09:34.620 EOF 00:09:34.620 )") 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.620 { 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme$subsystem", 00:09:34.620 "trtype": "$TEST_TRANSPORT", 00:09:34.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "$NVMF_PORT", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.620 "hdgst": ${hdgst:-false}, 00:09:34.620 "ddgst": ${ddgst:-false} 00:09:34.620 }, 
00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 } 00:09:34.620 EOF 00:09:34.620 )") 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78609 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78613 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.620 { 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme$subsystem", 00:09:34.620 "trtype": "$TEST_TRANSPORT", 00:09:34.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "$NVMF_PORT", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.620 "hdgst": ${hdgst:-false}, 00:09:34.620 "ddgst": ${ddgst:-false} 00:09:34.620 }, 00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 } 00:09:34.620 EOF 00:09:34.620 )") 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.620 { 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme$subsystem", 00:09:34.620 "trtype": "$TEST_TRANSPORT", 00:09:34.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "$NVMF_PORT", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.620 "hdgst": ${hdgst:-false}, 00:09:34.620 "ddgst": ${ddgst:-false} 00:09:34.620 }, 00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 } 00:09:34.620 EOF 00:09:34.620 )") 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme1", 00:09:34.620 "trtype": "tcp", 00:09:34.620 "traddr": "10.0.0.2", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "4420", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.620 "hdgst": false, 00:09:34.620 "ddgst": false 00:09:34.620 }, 00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 }' 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme1", 00:09:34.620 "trtype": "tcp", 00:09:34.620 "traddr": "10.0.0.2", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "4420", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.620 "hdgst": false, 00:09:34.620 "ddgst": false 00:09:34.620 }, 00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 }' 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
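The JSON printed here is what each bdevperf instance loads at startup, and it boils down to a single bdev_nvme_attach_controller call: create a bdev named Nvme1 over NVMe/TCP against the listener configured above, IPv4, port 4420, with header and data digests disabled. The same attach can be done interactively against any running SPDK app; a sketch using the customary rpc.py flag names (worth double-checking against the tree in use):

  # RPC equivalent of the generated bdevperf config above (sketch)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1   # the namespace bdev the jobs report on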
00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme1", 00:09:34.620 "trtype": "tcp", 00:09:34.620 "traddr": "10.0.0.2", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "4420", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.620 "hdgst": false, 00:09:34.620 "ddgst": false 00:09:34.620 }, 00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 }' 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:34.620 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.620 "params": { 00:09:34.620 "name": "Nvme1", 00:09:34.620 "trtype": "tcp", 00:09:34.620 "traddr": "10.0.0.2", 00:09:34.620 "adrfam": "ipv4", 00:09:34.620 "trsvcid": "4420", 00:09:34.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.620 "hdgst": false, 00:09:34.620 "ddgst": false 00:09:34.620 }, 00:09:34.620 "method": "bdev_nvme_attach_controller" 00:09:34.620 }' 00:09:34.620 [2024-07-24 21:49:40.322805] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:34.621 [2024-07-24 21:49:40.323600] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:34.621 [2024-07-24 21:49:40.331669] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:34.621 [2024-07-24 21:49:40.331863] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.878 [2024-07-24 21:49:40.334082] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:34.878 [2024-07-24 21:49:40.334181] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:34.878 [2024-07-24 21:49:40.344600] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:09:34.878 [2024-07-24 21:49:40.344919] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.878 21:49:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 78605 00:09:34.878 [2024-07-24 21:49:40.539297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.136 [2024-07-24 21:49:40.610337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.136 [2024-07-24 21:49:40.615012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:35.136 [2024-07-24 21:49:40.677494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.136 [2024-07-24 21:49:40.685648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.136 [2024-07-24 21:49:40.686007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.136 [2024-07-24 21:49:40.734698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.136 [2024-07-24 21:49:40.755402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:35.136 [2024-07-24 21:49:40.755945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.136 Running I/O for 1 seconds... 00:09:35.136 [2024-07-24 21:49:40.801117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.136 [2024-07-24 21:49:40.827142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:35.136 Running I/O for 1 seconds... 00:09:35.393 [2024-07-24 21:49:40.874958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.393 Running I/O for 1 seconds... 00:09:35.393 Running I/O for 1 seconds... 
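The interleaved EAL banners and "Running I/O for 1 seconds..." lines above come from four separate bdevperf processes, one per workload, each pinned to its own core and shm id and fed the generated config through process substitution (the --json /dev/fd/63 in the command trace; gen_nvmf_target_json is the common.sh helper whose output is the JSON shown above). Condensed from the trace, the launch-and-wait pattern is roughly this (a sketch; the background/PID bookkeeping stands in for the script's own):

  # the four workers traced above, one workload each (core masks, shm ids and flags from the trace)
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # per-job latency tables print as each 1 s run finishes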
00:09:36.324 00:09:36.324 Latency(us) 00:09:36.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.324 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:36.324 Nvme1n1 : 1.01 10981.70 42.90 0.00 0.00 11610.39 6702.55 19899.11 00:09:36.324 =================================================================================================================== 00:09:36.324 Total : 10981.70 42.90 0.00 0.00 11610.39 6702.55 19899.11 00:09:36.324 00:09:36.324 Latency(us) 00:09:36.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.324 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:36.324 Nvme1n1 : 1.01 7820.86 30.55 0.00 0.00 16278.74 8102.63 29193.31 00:09:36.324 =================================================================================================================== 00:09:36.324 Total : 7820.86 30.55 0.00 0.00 16278.74 8102.63 29193.31 00:09:36.324 00:09:36.324 Latency(us) 00:09:36.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.324 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:36.324 Nvme1n1 : 1.01 7679.22 30.00 0.00 0.00 16578.32 9413.35 26452.71 00:09:36.324 =================================================================================================================== 00:09:36.324 Total : 7679.22 30.00 0.00 0.00 16578.32 9413.35 26452.71 00:09:36.324 00:09:36.324 Latency(us) 00:09:36.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.324 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:36.324 Nvme1n1 : 1.00 161180.63 629.61 0.00 0.00 791.26 355.61 1869.27 00:09:36.324 =================================================================================================================== 00:09:36.324 Total : 161180.63 629.61 0.00 0.00 791.26 355.61 1869.27 00:09:36.324 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 78607 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 78609 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 78613 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.582 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.582 rmmod nvme_tcp 00:09:36.582 rmmod nvme_fabrics 00:09:36.582 rmmod nvme_keyring 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 78565 ']' 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 78565 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 78565 ']' 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 78565 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78565 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78565' 00:09:36.840 killing process with pid 78565 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 78565 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 78565 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.840 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.099 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:37.099 00:09:37.099 real 0m3.992s 00:09:37.099 user 0m17.366s 00:09:37.099 sys 0m2.268s 00:09:37.099 ************************************ 00:09:37.099 END TEST nvmf_bdev_io_wait 00:09:37.099 ************************************ 00:09:37.099 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:37.099 21:49:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.099 21:49:42 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:37.099 21:49:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:37.099 21:49:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.099 21:49:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.099 ************************************ 00:09:37.099 START TEST nvmf_queue_depth 00:09:37.099 ************************************ 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:37.099 * Looking for test storage... 00:09:37.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:37.099 Cannot find device "nvmf_tgt_br" 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.099 Cannot find device "nvmf_tgt_br2" 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:37.099 Cannot find device "nvmf_tgt_br" 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:37.099 Cannot find device "nvmf_tgt_br2" 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:37.099 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:37.357 21:49:42 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:37.357 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.357 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:37.358 21:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:09:37.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:09:37.358 00:09:37.358 --- 10.0.0.2 ping statistics --- 00:09:37.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.358 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:37.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:37.358 00:09:37.358 --- 10.0.0.3 ping statistics --- 00:09:37.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.358 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:37.358 00:09:37.358 --- 10.0.0.1 ping statistics --- 00:09:37.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.358 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:37.358 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=78839 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 78839 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 78839 ']' 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
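Aside for readers following the trace: the three pings above complete the veth/bridge scaffold that nvmf_veth_init builds before nvmf_tgt is launched inside the target namespace. A condensed sketch of that scaffold, using only the interface, namespace and address names that appear in the trace (the real common.sh also wires up the second target interface and its bridge leg, which this sketch abbreviates):

  # condensed sketch of the topology nvmf_veth_init builds (names taken from the trace above)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target check, matching the ping output above
  # the target app is then started inside the namespace, as traced:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2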
00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:37.616 21:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:37.616 [2024-07-24 21:49:43.134411] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:37.616 [2024-07-24 21:49:43.134528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.616 [2024-07-24 21:49:43.278514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.872 [2024-07-24 21:49:43.373421] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.872 [2024-07-24 21:49:43.373477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.872 [2024-07-24 21:49:43.373512] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.872 [2024-07-24 21:49:43.373524] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.872 [2024-07-24 21:49:43.373547] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.872 [2024-07-24 21:49:43.373584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.873 [2024-07-24 21:49:43.429348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.438 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.438 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:09:38.438 21:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.438 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.438 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 [2024-07-24 21:49:44.163305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 Malloc0 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 [2024-07-24 21:49:44.225971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=78871 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 78871 /var/tmp/bdevperf.sock 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 78871 ']' 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:38.698 21:49:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 [2024-07-24 21:49:44.308589] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
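The target-side bring-up for this queue-depth run is entirely RPC driven. Condensing the rpc_cmd calls traced above (transport, Malloc0 bdev, subsystem, namespace, listener) into one sequence gives roughly the sketch below; paths, the serial number and arguments are copied from the trace, but note that the harness's rpc_cmd wrapper adds the namespace/socket plumbing that this sketch omits:

  # target-side setup for nvmf_queue_depth, expressed as plain rpc.py calls (sketch)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420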
00:09:38.698 [2024-07-24 21:49:44.309040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78871 ] 00:09:38.957 [2024-07-24 21:49:44.460157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.957 [2024-07-24 21:49:44.558990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.957 [2024-07-24 21:49:44.614502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.891 21:49:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:39.891 21:49:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:09:39.891 21:49:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:39.891 21:49:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.891 21:49:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.891 NVMe0n1 00:09:39.891 21:49:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.892 21:49:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:39.892 Running I/O for 10 seconds... 00:09:52.094 00:09:52.094 Latency(us) 00:09:52.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.094 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:52.094 Verification LBA range: start 0x0 length 0x4000 00:09:52.094 NVMe0n1 : 10.09 8499.54 33.20 0.00 0.00 119835.51 27882.59 92465.34 00:09:52.095 =================================================================================================================== 00:09:52.095 Total : 8499.54 33.20 0.00 0.00 119835.51 27882.59 92465.34 00:09:52.095 0 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 78871 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 78871 ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 78871 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78871 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78871' 00:09:52.095 killing process with pid 78871 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 78871 00:09:52.095 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.095 00:09:52.095 Latency(us) 00:09:52.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.095 
=================================================================================================================== 00:09:52.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 78871 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.095 rmmod nvme_tcp 00:09:52.095 rmmod nvme_fabrics 00:09:52.095 rmmod nvme_keyring 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 78839 ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 78839 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 78839 ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 78839 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78839 00:09:52.095 killing process with pid 78839 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78839' 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 78839 00:09:52.095 21:49:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 78839 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- 
# ip -4 addr flush nvmf_init_if 00:09:52.095 00:09:52.095 real 0m13.609s 00:09:52.095 user 0m23.800s 00:09:52.095 sys 0m2.097s 00:09:52.095 ************************************ 00:09:52.095 END TEST nvmf_queue_depth 00:09:52.095 ************************************ 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.095 21:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.095 21:49:56 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:52.095 21:49:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:52.095 21:49:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.095 21:49:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.095 ************************************ 00:09:52.095 START TEST nvmf_target_multipath 00:09:52.095 ************************************ 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:52.095 * Looking for test storage... 00:09:52.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
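To recap the measurement side of the nvmf_queue_depth test that just finished: bdevperf is started with a queue depth of 1024, attached to the target over TCP, and driven for ten seconds through its RPC helper. A condensed sketch of those three traced steps, with the binary and script paths exactly as shown in the log:

  # initiator-side flow of the queue-depth test, condensed from the trace (sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
  # result reported above: ~8499 IOPS / 33.2 MiB/s verify at qd=1024 over the veth link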
00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.095 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:52.096 Cannot find device "nvmf_tgt_br" 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.096 Cannot find device "nvmf_tgt_br2" 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:52.096 Cannot find device "nvmf_tgt_br" 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:52.096 Cannot find device "nvmf_tgt_br2" 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:52.096 21:49:56 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:52.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:52.096 00:09:52.096 --- 10.0.0.2 ping statistics --- 00:09:52.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.096 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:52.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:52.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:09:52.096 00:09:52.096 --- 10.0.0.3 ping statistics --- 00:09:52.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.096 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:52.096 00:09:52.096 --- 10.0.0.1 ping statistics --- 00:09:52.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.096 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.096 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=79192 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 79192 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 79192 ']' 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:52.097 21:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.097 [2024-07-24 21:49:56.788637] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
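The multipath test starting here repeats the same namespace and target bring-up, but with four cores (-m 0xF) and two listeners, so the initiator reaches the same subsystem over both 10.0.0.2 and 10.0.0.3 and the test can flip ANA state per path. A condensed sketch of the path-level steps the trace below performs; addresses, NQNs and the -g/-G flags are taken from the log, and $NVME_HOSTNQN/$NVME_HOSTID stand for the host NQN/ID variables that common.sh generates:

  # multipath flow performed by the trace that follows (sketch)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  # per-path ANA state is then toggled on the target and read back from sysfs, e.g.:
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  cat /sys/block/nvme0c0n1/ana_state    # polled by check_ana_state until it matches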
00:09:52.097 [2024-07-24 21:49:56.788757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.097 [2024-07-24 21:49:56.926288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.097 [2024-07-24 21:49:57.011066] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.097 [2024-07-24 21:49:57.011392] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.097 [2024-07-24 21:49:57.011529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.097 [2024-07-24 21:49:57.011669] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.097 [2024-07-24 21:49:57.011713] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.097 [2024-07-24 21:49:57.011873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.097 [2024-07-24 21:49:57.011952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.097 [2024-07-24 21:49:57.012062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.097 [2024-07-24 21:49:57.012065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.097 [2024-07-24 21:49:57.065513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:52.097 21:49:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:52.097 21:49:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:09:52.097 21:49:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:52.097 21:49:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.097 21:49:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.097 21:49:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.097 21:49:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:52.355 [2024-07-24 21:49:57.981304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.355 21:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:52.613 Malloc0 00:09:52.613 21:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:52.871 21:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.129 21:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.387 [2024-07-24 21:49:58.917231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.387 21:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:53.645 [2024-07-24 21:49:59.133404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:53.645 21:49:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:53.645 21:49:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:53.903 21:49:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.903 21:49:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:09:53.903 21:49:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.903 21:49:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:53.903 21:49:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:55.883 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=79282 00:09:55.884 21:50:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:55.884 [global] 00:09:55.884 thread=1 00:09:55.884 invalidate=1 00:09:55.884 rw=randrw 00:09:55.884 time_based=1 00:09:55.884 runtime=6 00:09:55.884 ioengine=libaio 00:09:55.884 direct=1 00:09:55.884 bs=4096 00:09:55.884 iodepth=128 00:09:55.884 norandommap=0 00:09:55.884 numjobs=1 00:09:55.884 00:09:55.884 verify_dump=1 00:09:55.884 verify_backlog=512 00:09:55.884 verify_state_save=0 00:09:55.884 do_verify=1 00:09:55.884 verify=crc32c-intel 00:09:55.884 [job0] 00:09:55.884 filename=/dev/nvme0n1 00:09:55.884 Could not set queue depth (nvme0n1) 00:09:56.141 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:56.141 fio-3.35 00:09:56.141 Starting 1 thread 00:09:57.072 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:57.072 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:57.330 21:50:02 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:57.330 21:50:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:57.587 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:57.845 21:50:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 79282 00:10:03.102 00:10:03.102 job0: (groupid=0, jobs=1): err= 0: pid=79303: Wed Jul 24 21:50:07 2024 00:10:03.102 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(248MiB/6002msec) 00:10:03.102 slat (usec): min=2, max=5993, avg=55.11, stdev=217.48 00:10:03.102 clat (usec): min=1575, max=16383, avg=8167.65, stdev=1359.72 00:10:03.102 lat (usec): min=1599, max=16399, avg=8222.75, stdev=1364.40 00:10:03.102 clat percentiles (usec): 00:10:03.102 | 1.00th=[ 4359], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7504], 00:10:03.102 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8225], 00:10:03.102 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[11338], 00:10:03.102 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13435], 99.95th=[13698], 00:10:03.102 | 99.99th=[13960] 00:10:03.102 bw ( KiB/s): min= 2048, max=27920, per=52.86%, avg=22350.55, stdev=8011.22, samples=11 00:10:03.102 iops : min= 512, max= 6980, avg=5587.64, stdev=2002.81, samples=11 00:10:03.103 write: IOPS=6501, BW=25.4MiB/s (26.6MB/s)(133MiB/5239msec); 0 zone resets 00:10:03.103 slat (usec): min=4, max=2458, avg=63.83, stdev=153.95 00:10:03.103 clat (usec): min=1121, max=13781, avg=7132.64, stdev=1205.13 00:10:03.103 lat (usec): min=1387, max=13805, avg=7196.47, stdev=1209.33 00:10:03.103 clat percentiles (usec): 00:10:03.103 | 1.00th=[ 3326], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 6652], 00:10:03.103 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7504], 00:10:03.103 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8455], 00:10:03.103 | 99.00th=[10945], 99.50th=[11338], 99.90th=[12518], 99.95th=[12911], 00:10:03.103 | 99.99th=[13304] 00:10:03.103 bw ( KiB/s): min= 2120, max=28240, per=86.05%, avg=22378.91, stdev=7895.62, samples=11 00:10:03.103 iops : min= 530, max= 7060, avg=5594.73, stdev=1973.90, samples=11 00:10:03.103 lat (msec) : 2=0.03%, 4=1.52%, 10=93.48%, 20=4.97% 00:10:03.103 cpu : usr=5.52%, sys=21.86%, ctx=5713, majf=0, minf=133 00:10:03.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:03.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.103 issued rwts: total=63447,34061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.103 00:10:03.103 Run status group 0 (all jobs): 00:10:03.103 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=248MiB (260MB), run=6002-6002msec 00:10:03.103 WRITE: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=133MiB (140MB), run=5239-5239msec 00:10:03.103 00:10:03.103 Disk stats (read/write): 00:10:03.103 nvme0n1: ios=62435/33529, merge=0/0, ticks=489737/224513, in_queue=714250, util=98.71% 00:10:03.103 21:50:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=79383 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:03.103 21:50:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:03.103 [global] 00:10:03.103 thread=1 00:10:03.103 invalidate=1 00:10:03.103 rw=randrw 00:10:03.103 time_based=1 00:10:03.103 runtime=6 00:10:03.103 ioengine=libaio 00:10:03.103 direct=1 00:10:03.103 bs=4096 00:10:03.103 iodepth=128 00:10:03.103 norandommap=0 00:10:03.103 numjobs=1 00:10:03.103 00:10:03.103 verify_dump=1 00:10:03.103 verify_backlog=512 00:10:03.103 verify_state_save=0 00:10:03.103 do_verify=1 00:10:03.103 verify=crc32c-intel 00:10:03.103 [job0] 00:10:03.103 filename=/dev/nvme0n1 00:10:03.103 Could not set queue depth (nvme0n1) 00:10:03.103 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.103 fio-3.35 00:10:03.103 Starting 1 thread 00:10:03.668 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:03.925 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:04.489 21:50:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:04.489 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:04.746 21:50:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 79383 00:10:08.959 00:10:08.959 job0: (groupid=0, jobs=1): err= 0: pid=79410: Wed Jul 24 21:50:14 2024 00:10:08.959 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(270MiB/6002msec) 00:10:08.959 slat (usec): min=6, max=5741, avg=43.89, stdev=182.94 00:10:08.959 clat (usec): min=270, max=18221, avg=7547.95, stdev=2116.57 00:10:08.959 lat (usec): min=290, max=18231, avg=7591.84, stdev=2128.12 00:10:08.959 clat percentiles (usec): 00:10:08.959 | 1.00th=[ 1352], 5.00th=[ 3818], 10.00th=[ 4686], 20.00th=[ 6194], 00:10:08.959 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8029], 00:10:08.959 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[11469], 00:10:08.959 | 99.00th=[12780], 99.50th=[13304], 99.90th=[16712], 99.95th=[16909], 00:10:08.959 | 99.99th=[17957] 00:10:08.959 bw ( KiB/s): min= 8216, max=39056, per=53.81%, avg=24811.55, stdev=9315.76, samples=11 00:10:08.959 iops : min= 2054, max= 9764, avg=6202.82, stdev=2328.85, samples=11 00:10:08.959 write: IOPS=6974, BW=27.2MiB/s (28.6MB/s)(144MiB/5288msec); 0 zone resets 00:10:08.959 slat (usec): min=14, max=1516, avg=54.19, stdev=129.70 00:10:08.959 clat (usec): min=219, max=15679, avg=6502.87, stdev=1980.32 00:10:08.959 lat (usec): min=250, max=15705, avg=6557.06, stdev=1990.83 00:10:08.959 clat percentiles (usec): 00:10:08.959 | 1.00th=[ 1004], 5.00th=[ 2769], 10.00th=[ 3654], 20.00th=[ 4621], 00:10:08.959 | 30.00th=[ 6194], 40.00th=[ 6849], 50.00th=[ 7111], 60.00th=[ 7373], 00:10:08.959 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8160], 95.00th=[ 8586], 00:10:08.959 | 99.00th=[10945], 99.50th=[11731], 99.90th=[14877], 99.95th=[15270], 00:10:08.959 | 99.99th=[15533] 00:10:08.959 bw ( KiB/s): min= 8624, max=38200, per=89.09%, avg=24855.82, stdev=9160.87, samples=11 00:10:08.959 iops : min= 2156, max= 9550, avg=6213.91, stdev=2290.15, samples=11 00:10:08.959 lat (usec) : 250=0.01%, 500=0.05%, 750=0.16%, 1000=0.43% 00:10:08.959 lat (msec) : 2=2.05%, 4=5.92%, 10=85.62%, 20=5.77% 00:10:08.959 cpu : usr=5.78%, sys=23.96%, ctx=6591, majf=0, minf=133 00:10:08.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:08.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.959 issued rwts: total=69181,36883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.959 00:10:08.959 Run status group 0 (all jobs): 00:10:08.959 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=270MiB (283MB), run=6002-6002msec 00:10:08.959 WRITE: bw=27.2MiB/s (28.6MB/s), 27.2MiB/s-27.2MiB/s (28.6MB/s-28.6MB/s), io=144MiB (151MB), run=5288-5288msec 00:10:08.959 00:10:08.959 Disk stats (read/write): 00:10:08.959 nvme0n1: ios=68171/36359, merge=0/0, ticks=492657/221102, in_queue=713759, util=98.66% 00:10:08.959 21:50:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:09.216 21:50:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.216 21:50:14 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:10:09.216 21:50:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:09.216 21:50:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.216 21:50:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:09.216 21:50:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.216 21:50:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:10:09.216 21:50:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.474 rmmod nvme_tcp 00:10:09.474 rmmod nvme_fabrics 00:10:09.474 rmmod nvme_keyring 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 79192 ']' 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 79192 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 79192 ']' 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 79192 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79192 00:10:09.474 killing process with pid 79192 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79192' 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 79192 00:10:09.474 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 
-- # wait 79192 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:09.732 00:10:09.732 real 0m19.104s 00:10:09.732 user 1m11.647s 00:10:09.732 sys 0m9.538s 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:09.732 21:50:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.732 ************************************ 00:10:09.732 END TEST nvmf_target_multipath 00:10:09.732 ************************************ 00:10:09.732 21:50:15 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.732 21:50:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:09.732 21:50:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:09.732 21:50:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.732 ************************************ 00:10:09.732 START TEST nvmf_zcopy 00:10:09.732 ************************************ 00:10:09.732 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.991 * Looking for test storage... 
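(A note on the multipath test that just finished: the check_ana_state helper traced repeatedly above, target/multipath.sh lines 18-25, waits for /sys/block/<path>/ana_state to report the expected ANA state after each nvmf_subsystem_listener_set_ana_state RPC. The trace only shows the variable setup and the two [[ ]] tests at line 25; the sketch below reconstructs the surrounding retry loop, so the sleep interval and the return handling are assumptions, not the verbatim helper:)

  check_ana_state() {
      local path=$1 ana_state=$2
      # Allow up to ~20 retries: sysfs can lag behind the RPC that changed the state.
      local timeout=20
      local ana_state_f=/sys/block/$path/ana_state

      # Re-test until the attribute exists and matches the expected state
      # (optimized, non-optimized or inaccessible, as checked throughout this run).
      while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
          sleep 1
          (( timeout-- == 0 )) && return 1
      done
  }

(It is invoked exactly as the trace shows, e.g. check_ana_state nvme0c0n1 inaccessible right after the 10.0.0.2 listener is flipped to inaccessible.)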
00:10:09.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:09.991 Cannot find device "nvmf_tgt_br" 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.991 Cannot find device "nvmf_tgt_br2" 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:09.991 Cannot find device "nvmf_tgt_br" 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:09.991 Cannot find device "nvmf_tgt_br2" 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:09.991 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:09.992 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:10.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:10.250 00:10:10.250 --- 10.0.0.2 ping statistics --- 00:10:10.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.250 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:10.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:10.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:10:10.250 00:10:10.250 --- 10.0.0.3 ping statistics --- 00:10:10.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.250 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:10.250 00:10:10.250 --- 10.0.0.1 ping statistics --- 00:10:10.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.250 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=79647 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 79647 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 79647 ']' 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:10.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:10.250 21:50:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.250 [2024-07-24 21:50:15.960061] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:10:10.250 [2024-07-24 21:50:15.960158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.508 [2024-07-24 21:50:16.100088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.508 [2024-07-24 21:50:16.196836] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.508 [2024-07-24 21:50:16.196887] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:10.508 [2024-07-24 21:50:16.196898] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.508 [2024-07-24 21:50:16.196907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.508 [2024-07-24 21:50:16.196914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.508 [2024-07-24 21:50:16.196940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.766 [2024-07-24 21:50:16.251465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.360 21:50:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:11.360 21:50:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:10:11.360 21:50:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.360 21:50:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.360 21:50:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.360 [2024-07-24 21:50:17.045248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.360 [2024-07-24 21:50:17.061339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.360 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:10:11.625 malloc0 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:11.625 { 00:10:11.625 "params": { 00:10:11.625 "name": "Nvme$subsystem", 00:10:11.625 "trtype": "$TEST_TRANSPORT", 00:10:11.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.625 "adrfam": "ipv4", 00:10:11.625 "trsvcid": "$NVMF_PORT", 00:10:11.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.625 "hdgst": ${hdgst:-false}, 00:10:11.625 "ddgst": ${ddgst:-false} 00:10:11.625 }, 00:10:11.625 "method": "bdev_nvme_attach_controller" 00:10:11.625 } 00:10:11.625 EOF 00:10:11.625 )") 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:11.625 21:50:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:11.625 "params": { 00:10:11.625 "name": "Nvme1", 00:10:11.625 "trtype": "tcp", 00:10:11.625 "traddr": "10.0.0.2", 00:10:11.625 "adrfam": "ipv4", 00:10:11.625 "trsvcid": "4420", 00:10:11.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.625 "hdgst": false, 00:10:11.625 "ddgst": false 00:10:11.625 }, 00:10:11.625 "method": "bdev_nvme_attach_controller" 00:10:11.625 }' 00:10:11.625 [2024-07-24 21:50:17.147060] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:10:11.625 [2024-07-24 21:50:17.147143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79688 ] 00:10:11.626 [2024-07-24 21:50:17.281555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.884 [2024-07-24 21:50:17.382816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.884 [2024-07-24 21:50:17.447885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.884 Running I/O for 10 seconds... 
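(The bdevperf pass that just started gets its target definition from gen_nvmf_target_json: the bdev_nvme_attach_controller fragment printed above is wrapped into a bdev-subsystem config and handed to bdevperf through a process-substitution descriptor, --json /dev/fd/62. A stand-alone equivalent is sketched below; the params block and the command-line flags are taken verbatim from this log, while the outer subsystems/bdev/config wrapper is inferred from the helper's jq heredoc and the temporary file name is illustrative:)

  # Equivalent of the traced run: attach one NVMe-oF/TCP controller and drive a
  # 10 s verify workload at queue depth 128 with 8 KiB I/O (-t 10 -q 128 -w verify -o 8192).
  cat > /tmp/bdevperf_zcopy.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/bdevperf_zcopy.json -t 10 -q 128 -w verify -o 8192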
00:10:24.083 00:10:24.083 Latency(us) 00:10:24.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.083 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:24.083 Verification LBA range: start 0x0 length 0x1000 00:10:24.083 Nvme1n1 : 10.01 5924.68 46.29 0.00 0.00 21536.79 1750.11 32887.16 00:10:24.083 =================================================================================================================== 00:10:24.083 Total : 5924.68 46.29 0.00 0.00 21536.79 1750.11 32887.16 00:10:24.083 21:50:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=79800 00:10:24.083 21:50:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:24.084 { 00:10:24.084 "params": { 00:10:24.084 "name": "Nvme$subsystem", 00:10:24.084 "trtype": "$TEST_TRANSPORT", 00:10:24.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.084 "adrfam": "ipv4", 00:10:24.084 "trsvcid": "$NVMF_PORT", 00:10:24.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.084 "hdgst": ${hdgst:-false}, 00:10:24.084 "ddgst": ${ddgst:-false} 00:10:24.084 }, 00:10:24.084 "method": "bdev_nvme_attach_controller" 00:10:24.084 } 00:10:24.084 EOF 00:10:24.084 )") 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:24.084 [2024-07-24 21:50:27.794094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.794139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:24.084 21:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:24.084 "params": { 00:10:24.084 "name": "Nvme1", 00:10:24.084 "trtype": "tcp", 00:10:24.084 "traddr": "10.0.0.2", 00:10:24.084 "adrfam": "ipv4", 00:10:24.084 "trsvcid": "4420", 00:10:24.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.084 "hdgst": false, 00:10:24.084 "ddgst": false 00:10:24.084 }, 00:10:24.084 "method": "bdev_nvme_attach_controller" 00:10:24.084 }' 00:10:24.084 [2024-07-24 21:50:27.810060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.810088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.818047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.818070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.826049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.826073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.834062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.834093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.843255] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:10:24.084 [2024-07-24 21:50:27.843338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79800 ] 00:10:24.084 [2024-07-24 21:50:27.846071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.846099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.854059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.854084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.866065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.866089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.878069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.878103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.890079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.890108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.902071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.902096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.914085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:24.084 [2024-07-24 21:50:27.914115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.926080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.926107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.938083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.938109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.950086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.950110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.962092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.962120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.974123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.974161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.986112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:27.986143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:27.987188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.084 [2024-07-24 21:50:28.002125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.002163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.014115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.014146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.026110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.026140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.038114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.038143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.046136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.046179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.058157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.058209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.066126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.066159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.074121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 
21:50:28.074150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.086125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.086157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.091381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.084 [2024-07-24 21:50:28.094126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.094154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.102128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.102156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.110144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.110177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.118140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.118172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.126146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.126179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.134147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.134179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.142152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.142184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.150152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.150184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.152848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:24.084 [2024-07-24 21:50:28.158150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.158178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.166153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.166183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.174154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.174184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.182149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.182176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.084 [2024-07-24 21:50:28.190151] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.084 [2024-07-24 21:50:28.190179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.198181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.198222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.206177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.206214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.214178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.214210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.222186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.222221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.230194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.230226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.238202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.238235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.246208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.246242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.254237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.254276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.262248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.262294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 Running I/O for 5 seconds... 
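For context on the output around this point: the JSON block echoed by nvmf/common.sh at the start of this run appears to be the bdev_nvme_attach_controller configuration handed to bdevperf, giving the host side a TCP-attached NVMe bdev (Nvme1 at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, header/data digests disabled) to drive once bdevperf prints "Running I/O for 5 seconds...". The interleaved "Requested NSID 1 already in use" / "Unable to add namespace" pairs are the target rejecting repeated nvmf_subsystem_add_ns calls for an NSID that is already attached to the subsystem. A minimal sketch of equivalent manual RPC calls follows; the rpc.py flag spellings and the Malloc0 bdev name are assumptions for illustration, not taken from this log, and exact options can differ between SPDK versions.

  # Sketch only -- assumed flags/paths, not the exact commands run by this job.
  # Host side: attach the remote controller described by the JSON config above.
  scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Target side: re-adding a namespace under an NSID that already exists is the
  # kind of call that produces "Requested NSID 1 already in use" followed by
  # "Unable to add namespace" (Malloc0 is a hypothetical backing bdev).
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0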
00:10:24.085 [2024-07-24 21:50:28.270246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.270282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.282899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.282939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.292760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.292796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.304101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.304139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.315151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.315194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.327791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.327828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.337683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.337721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.352710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.352758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.369120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.369185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.386064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.386132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.404357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.404423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.419035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.419091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.434936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.434996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.452035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.452090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.468092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 
[2024-07-24 21:50:28.468149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.477345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.477395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.492976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.493047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.502989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.503042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.517764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.517822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.534996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.535053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.551994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.552056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.568192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.568250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.585190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.585251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.595302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.595355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.607371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.607439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.622084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.622142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.631806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.631863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.647153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.647215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.664671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.664728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.681216] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.681267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.698216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.698286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.714453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.714506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.724582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.724655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.739476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.739537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.749904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.749945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.764741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.764804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.782185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.782234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.798846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.798894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.817177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.817221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.833080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.833134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.849498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.849558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.866051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.866097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.883438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.883486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.893581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.085 [2024-07-24 21:50:28.893643] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.085 [2024-07-24 21:50:28.905183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.905236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:28.915885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.915927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:28.928183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.928225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:28.937819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.937856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:28.948817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.948860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:28.959989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.960040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:28.970914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.970952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:28.988606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:28.988655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.004349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.004403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.013826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.013864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.028219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.028266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.044668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.044731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.054552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.054598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.067638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.067683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.078030] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.078076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.092728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.092782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.108470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.108524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.118046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.118093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.129558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.129623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.144848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.144904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.160553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.160620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.178425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.178488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.193602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.193666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.203498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.203549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.219317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.219371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.235836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.235890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.253718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.253774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.268738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.268808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.279051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.279110] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.290905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.290966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.306812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.306873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.324379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.324439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.339929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.339986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.349308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.349358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.362625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.362679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.377289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.377344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.387116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.387156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.403047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.403086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.419227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.419269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.430118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.430165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.444624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.444666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.454401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.454439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.465834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.465885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.482498] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.482546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.499811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.499854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.514214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.514254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.523651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.523692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.537957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.537998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.555624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.555668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.571912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.571956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.591381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.591425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.606480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.606530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.622104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.622155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.631163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.631206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.086 [2024-07-24 21:50:29.647635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.086 [2024-07-24 21:50:29.647674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.665044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.665085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.680855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.680898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.698891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.698934] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.713506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.713551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.729289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.729335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.746262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.746310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.764124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.764185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.774818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.774864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.087 [2024-07-24 21:50:29.786037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.087 [2024-07-24 21:50:29.786077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.799069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.799110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.817494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.817545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.832401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.832453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.842882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.842930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.857518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.857569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.874633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.874681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.884112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.884151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.898702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.898739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.916251] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.916295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.934099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.934142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.944785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.944823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.955955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.955993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.966874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.966922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.345 [2024-07-24 21:50:29.985421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.345 [2024-07-24 21:50:29.985502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.346 [2024-07-24 21:50:30.000384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.346 [2024-07-24 21:50:30.000428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.346 [2024-07-24 21:50:30.009954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.346 [2024-07-24 21:50:30.009994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.346 [2024-07-24 21:50:30.021797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.346 [2024-07-24 21:50:30.021839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.346 [2024-07-24 21:50:30.032891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.346 [2024-07-24 21:50:30.032931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.346 [2024-07-24 21:50:30.043552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.346 [2024-07-24 21:50:30.043594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.346 [2024-07-24 21:50:30.054165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.346 [2024-07-24 21:50:30.054205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.064798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.064846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.079406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.079456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.089378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.089421] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.104214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.104258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.114267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.114309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.125824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.125866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.141172] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.141221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.150605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.150661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.166503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.166552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.183491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.183540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.193487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.193532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.208183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.208232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.218711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.218752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.229465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.229511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.242000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.242048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.259418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.259472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.269850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.269892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.280656] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.280696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.293258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.293305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.311824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.311870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.326184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.326249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.335678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.335718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.658 [2024-07-24 21:50:30.350392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.658 [2024-07-24 21:50:30.350435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.917 [2024-07-24 21:50:30.366641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.917 [2024-07-24 21:50:30.366682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.917 [2024-07-24 21:50:30.376663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.917 [2024-07-24 21:50:30.376702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.917 [2024-07-24 21:50:30.391426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.917 [2024-07-24 21:50:30.391467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.917 [2024-07-24 21:50:30.407852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.917 [2024-07-24 21:50:30.407890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.917 [2024-07-24 21:50:30.425638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.917 [2024-07-24 21:50:30.425675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.917 [2024-07-24 21:50:30.435571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.917 [2024-07-24 21:50:30.435619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.917 [2024-07-24 21:50:30.446656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.446692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.464865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.464902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.475063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.475100] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.489649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.489685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.506923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.506971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.522914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.522951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.541999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.542036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.552185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.552221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.562961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.563000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.575873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.575910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.593753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.593791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.608352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.608399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.618169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.618205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.918 [2024-07-24 21:50:30.632407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.918 [2024-07-24 21:50:30.632444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.650225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.650263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.660152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.660190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.670675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.670711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.681580] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.681631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.692262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.692299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.710053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.710090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.726549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.726589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.736127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.736163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.747802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.747837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.758506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.758543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.773596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.773647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.790811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.790847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.807532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.807572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.824122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.824185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.840770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.840826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.858896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.858947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.874315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.874375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.176 [2024-07-24 21:50:30.892563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.176 [2024-07-24 21:50:30.892639] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.435 [2024-07-24 21:50:30.907655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.435 [2024-07-24 21:50:30.907717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps from 2024-07-24 21:50:30.923 through 21:50:33.155 (elapsed-time prefixes 00:10:25.435 to 00:10:27.507); repeated entries elided ...]
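The pair above is the duplicate-namespace error path: each attempt asks the target to add a namespace under an NSID the subsystem already exposes, spdk_nvmf_subsystem_add_ns_ext rejects it, and the RPC layer logs the failure. A minimal sketch of triggering the same pair by hand, assuming scripts/rpc.py from the SPDK repo, the default RPC socket, and a subsystem whose NSID 1 is already populated (the subsystem and bdev names simply mirror the ones used later in this log):

  # Ask for NSID 1 on a subsystem that already has NSID 1 in use
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  #   target log: subsystem.c:  Requested NSID 1 already in use
  #   RPC reply:  nvmf_rpc.c:   Unable to add namespace (the call returns an error)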
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair keeps repeating from 2024-07-24 21:50:33.165 through 21:50:33.275; repeated entries elided ...]
00:10:27.766 Latency(us)
00:10:27.766 Device Information          : runtime(s)      IOPS   MiB/s  Fail/s  TO/s   Average      min       max
00:10:27.766 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:27.766 Nvme1n1                     :       5.01  11636.35   90.91    0.00  0.00  10986.57  4885.41  21924.77
00:10:27.766 ===================================================================================================================
00:10:27.766 Total                       :             11636.35   90.91    0.00  0.00  10986.57  4885.41  21924.77
00:10:27.766 [2024-07-24 21:50:33.281582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:27.766 [2024-07-24 21:50:33.281624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... a final run of the same error pair from 2024-07-24 21:50:33.289 through 21:50:33.401; repeated entries elided ...]
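The summary above reports 11636.35 IOPS at an 8192-byte I/O size over the 5.01 s run; a quick check (a throwaway bc one-liner, not part of the test) confirms this is consistent with the MiB/s column:

  # 11636.35 I/Os per second x 8192 bytes per I/O, converted to MiB/s
  echo '11636.35 * 8192 / 1048576' | bc -l    # ~90.91, matching the reported 90.91 MiB/s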
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.409604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.409638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.417632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.417662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.425637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.425667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.433625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.433649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.441629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.441653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.449646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.449677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.457641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.457669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.465629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.465651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 [2024-07-24 21:50:33.473635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.766 [2024-07-24 21:50:33.473658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.766 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79800) - No such process 00:10:27.766 21:50:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 79800 00:10:27.766 21:50:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.766 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.766 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.024 delay0 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.024 
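The three rpc_cmd calls above swap the plain malloc-backed namespace out for one backed by a delay bdev, giving the upcoming abort run I/O that stays in flight long enough to cancel. Restated as standalone invocations (a sketch: the scripts/rpc.py path and the flag readings in the comments are my interpretation of the delay bdev RPC, not output captured from this run):

  # Drop NSID 1, wrap malloc0 in a delay bdev, and re-add it as NSID 1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # read/write latencies (average and p99), in microseconds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1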
21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.024 21:50:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:28.024 [2024-07-24 21:50:33.670403] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:34.620 Initializing NVMe Controllers 00:10:34.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:34.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:34.620 Initialization complete. Launching workers. 00:10:34.620 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 176 00:10:34.620 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 463, failed to submit 33 00:10:34.620 success 366, unsuccess 97, failed 0 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.620 rmmod nvme_tcp 00:10:34.620 rmmod nvme_fabrics 00:10:34.620 rmmod nvme_keyring 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 79647 ']' 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 79647 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 79647 ']' 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 79647 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79647 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:34.620 killing process with pid 79647 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79647' 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 79647 00:10:34.620 21:50:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 79647 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
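One consistency check on the abort statistics above (reading the counters at face value, since the example itself defines their exact semantics): every abort that was submitted should show up as success, unsuccess, or failed, while the 33 "failed to submit" aborts sit outside that total.

  # success + unsuccess + failed should equal "abort submitted"
  echo $((366 + 97 + 0))    # 463, matching the reported submitted count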
00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.620 21:50:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:34.620 00:10:34.620 real 0m24.668s 00:10:34.620 user 0m40.356s 00:10:34.620 sys 0m6.822s 00:10:34.621 21:50:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:34.621 21:50:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.621 ************************************ 00:10:34.621 END TEST nvmf_zcopy 00:10:34.621 ************************************ 00:10:34.621 21:50:40 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:34.621 21:50:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:34.621 21:50:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:34.621 21:50:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:34.621 ************************************ 00:10:34.621 START TEST nvmf_nmic 00:10:34.621 ************************************ 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:34.621 * Looking for test storage... 
00:10:34.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:34.621 Cannot find device "nvmf_tgt_br" 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.621 Cannot find device "nvmf_tgt_br2" 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:34.621 Cannot find device "nvmf_tgt_br" 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:34.621 Cannot find device "nvmf_tgt_br2" 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:34.621 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:34.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:34.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:34.878 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:34.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:10:34.879 00:10:34.879 --- 10.0.0.2 ping statistics --- 00:10:34.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.879 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:34.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:34.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:10:34.879 00:10:34.879 --- 10.0.0.3 ping statistics --- 00:10:34.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.879 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:34.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:34.879 00:10:34.879 --- 10.0.0.1 ping statistics --- 00:10:34.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.879 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.879 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=80128 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 80128 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 80128 ']' 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:35.135 21:50:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.135 [2024-07-24 21:50:40.660989] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:10:35.136 [2024-07-24 21:50:40.661080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.136 [2024-07-24 21:50:40.801258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.393 [2024-07-24 21:50:40.905987] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.393 [2024-07-24 21:50:40.906057] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:35.393 [2024-07-24 21:50:40.906072] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.393 [2024-07-24 21:50:40.906082] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.393 [2024-07-24 21:50:40.906091] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.393 [2024-07-24 21:50:40.906276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.393 [2024-07-24 21:50:40.906525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.393 [2024-07-24 21:50:40.907035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.393 [2024-07-24 21:50:40.907088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.393 [2024-07-24 21:50:40.965950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.959 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.959 [2024-07-24 21:50:41.671546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 Malloc0 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 [2024-07-24 21:50:41.740666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 test case1: single bdev can't be used in multiple subsystems 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 [2024-07-24 21:50:41.776493] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:36.217 [2024-07-24 21:50:41.776534] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:36.217 [2024-07-24 21:50:41.776547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.217 request: 00:10:36.217 { 00:10:36.217 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:36.217 "namespace": { 00:10:36.217 "bdev_name": "Malloc0", 00:10:36.217 "no_auto_visible": false 00:10:36.217 }, 00:10:36.217 "method": "nvmf_subsystem_add_ns", 00:10:36.217 "req_id": 1 00:10:36.217 } 00:10:36.217 Got JSON-RPC error response 00:10:36.217 response: 00:10:36.217 { 00:10:36.217 "code": -32602, 00:10:36.217 "message": "Invalid parameters" 00:10:36.217 } 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:36.217 Adding namespace failed - expected result. 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
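Test case 1 above verifies that a bdev already claimed as a namespace of one subsystem cannot be added to a second one: nvmf_subsystem_add_ns on cnode2 fails with "bdev Malloc0 already claimed: type exclusive_write", and the JSON-RPC error is the expected result. A condensed sketch of the same check driven directly with scripts/rpc.py against a running target (the harness uses its rpc_cmd wrapper instead; paths, NQNs and serials are the ones from the trace):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# the second claim on the same bdev must be rejected by the target
if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: Malloc0 was added to two subsystems" >&2
    exit 1
fi
echo 'Adding namespace failed - expected result.'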
00:10:36.217 test case2: host connect to nvmf target in multiple paths 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.217 [2024-07-24 21:50:41.792652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.217 21:50:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:36.475 21:50:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.475 21:50:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:10:36.475 21:50:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.475 21:50:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:36.475 21:50:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:10:38.398 21:50:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:38.398 21:50:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:38.398 21:50:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.398 21:50:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:38.398 21:50:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.398 21:50:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:10:38.398 21:50:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:38.398 [global] 00:10:38.398 thread=1 00:10:38.398 invalidate=1 00:10:38.398 rw=write 00:10:38.398 time_based=1 00:10:38.398 runtime=1 00:10:38.398 ioengine=libaio 00:10:38.398 direct=1 00:10:38.398 bs=4096 00:10:38.398 iodepth=1 00:10:38.398 norandommap=0 00:10:38.398 numjobs=1 00:10:38.398 00:10:38.398 verify_dump=1 00:10:38.398 verify_backlog=512 00:10:38.398 verify_state_save=0 00:10:38.398 do_verify=1 00:10:38.398 verify=crc32c-intel 00:10:38.398 [job0] 00:10:38.398 filename=/dev/nvme0n1 00:10:38.654 Could not set queue depth (nvme0n1) 00:10:38.654 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.654 fio-3.35 00:10:38.654 Starting 1 thread 00:10:40.026 00:10:40.026 job0: (groupid=0, jobs=1): err= 0: pid=80220: Wed Jul 24 21:50:45 2024 00:10:40.026 read: IOPS=2960, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:10:40.026 slat (nsec): min=12254, max=34543, avg=13742.06, stdev=1576.31 00:10:40.026 clat (usec): 
min=146, max=544, avg=182.47, stdev=19.56 00:10:40.026 lat (usec): min=159, max=565, avg=196.22, stdev=19.77 00:10:40.026 clat percentiles (usec): 00:10:40.026 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:40.026 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:40.026 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:10:40.026 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 355], 99.95th=[ 545], 00:10:40.026 | 99.99th=[ 545] 00:10:40.026 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:40.026 slat (nsec): min=15873, max=93553, avg=20090.04, stdev=3499.97 00:10:40.026 clat (usec): min=88, max=241, avg=112.96, stdev=15.82 00:10:40.026 lat (usec): min=107, max=334, avg=133.05, stdev=16.75 00:10:40.026 clat percentiles (usec): 00:10:40.026 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 101], 00:10:40.026 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 113], 00:10:40.026 | 70.00th=[ 117], 80.00th=[ 124], 90.00th=[ 135], 95.00th=[ 145], 00:10:40.026 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 184], 99.95th=[ 208], 00:10:40.026 | 99.99th=[ 241] 00:10:40.026 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:40.026 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:40.026 lat (usec) : 100=8.82%, 250=90.82%, 500=0.33%, 750=0.03% 00:10:40.026 cpu : usr=2.20%, sys=7.80%, ctx=6035, majf=0, minf=2 00:10:40.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.026 issued rwts: total=2963,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.026 00:10:40.026 Run status group 0 (all jobs): 00:10:40.026 READ: bw=11.6MiB/s (12.1MB/s), 11.6MiB/s-11.6MiB/s (12.1MB/s-12.1MB/s), io=11.6MiB (12.1MB), run=1001-1001msec 00:10:40.026 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:40.026 00:10:40.026 Disk stats (read/write): 00:10:40.026 nvme0n1: ios=2610/2900, merge=0/0, ticks=489/346, in_queue=835, util=91.18% 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.026 rmmod nvme_tcp 00:10:40.026 rmmod nvme_fabrics 00:10:40.026 rmmod nvme_keyring 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 80128 ']' 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 80128 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 80128 ']' 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 80128 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80128 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:40.026 killing process with pid 80128 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80128' 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 80128 00:10:40.026 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 80128 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:40.285 00:10:40.285 real 0m5.709s 00:10:40.285 user 0m18.311s 00:10:40.285 sys 0m2.231s 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:40.285 21:50:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.285 ************************************ 00:10:40.285 END TEST nvmf_nmic 00:10:40.285 ************************************ 00:10:40.285 21:50:45 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:40.285 21:50:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:40.285 21:50:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:40.285 21:50:45 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.285 ************************************ 00:10:40.285 START TEST nvmf_fio_target 00:10:40.285 ************************************ 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:40.285 * Looking for test storage... 00:10:40.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.285 21:50:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.286 21:50:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.286 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.286 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.286 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.286 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.546 
21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.546 Cannot find device "nvmf_tgt_br" 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.546 Cannot find device "nvmf_tgt_br2" 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.546 Cannot find device "nvmf_tgt_br" 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.546 Cannot find device "nvmf_tgt_br2" 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.546 21:50:46 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.546 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:40.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:10:40.833 00:10:40.833 --- 10.0.0.2 ping statistics --- 00:10:40.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.833 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:40.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:40.833 00:10:40.833 --- 10.0.0.3 ping statistics --- 00:10:40.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.833 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:10:40.833 00:10:40.833 --- 10.0.0.1 ping statistics --- 00:10:40.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.833 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.833 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=80398 00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 80398 00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 80398 ']' 00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
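nvmfappstart here launches the target inside the nvmf_tgt_ns_spdk namespace (all trace groups enabled, core mask 0xF) and then waits for its JSON-RPC socket before any further commands are issued. A rough equivalent of that startup step (binary and socket paths as in the trace; the polling loop is an illustration, not the harness's exact waitforlisten implementation):
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll until the app has created /var/tmp/spdk.sock and answers RPCs
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt is up as pid $nvmfpid"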
00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:40.834 21:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.834 [2024-07-24 21:50:46.392566] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:10:40.834 [2024-07-24 21:50:46.392687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.834 [2024-07-24 21:50:46.529184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.092 [2024-07-24 21:50:46.636008] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.092 [2024-07-24 21:50:46.636216] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.092 [2024-07-24 21:50:46.636349] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.092 [2024-07-24 21:50:46.636363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.092 [2024-07-24 21:50:46.636371] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.092 [2024-07-24 21:50:46.636530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.092 [2024-07-24 21:50:46.636605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.092 [2024-07-24 21:50:46.636691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.092 [2024-07-24 21:50:46.636749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.092 [2024-07-24 21:50:46.692098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:41.657 21:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:41.657 21:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:10:41.657 21:50:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.657 21:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.657 21:50:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.657 21:50:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.657 21:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.915 [2024-07-24 21:50:47.606695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.173 21:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.431 21:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:42.431 21:50:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.689 21:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:42.689 21:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.946 21:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:42.946 21:50:48 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.202 21:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:43.202 21:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:43.460 21:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.718 21:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:43.718 21:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.976 21:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:43.976 21:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.233 21:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:44.233 21:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:44.501 21:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.785 21:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.785 21:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.785 21:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.785 21:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.043 21:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.301 [2024-07-24 21:50:50.933180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.301 21:50:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:45.559 21:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:45.816 21:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.074 21:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:46.074 21:50:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:10:46.074 21:50:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.074 21:50:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:10:46.074 21:50:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 
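The fio.sh setup traced above ends with a single subsystem, cnode1, exposing four namespaces: the two plain malloc bdevs, a RAID-0 volume over Malloc2/Malloc3 and a concat volume over Malloc4-Malloc6, listening on 10.0.0.2:4420; one nvme connect from the host side then yields /dev/nvme0n1 through /dev/nvme0n4 for the fio jobs. A condensed sketch of that RPC sequence (arguments copied from the trace; the harness issues the same calls through rpc.py and its rpc_cmd wrapper):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do "$RPC" bdev_malloc_create 64 512 -b Malloc$i; done
"$RPC" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
"$RPC" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
done
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: one connect exposes the four namespaces used by the fio jobs
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 \
    --hostid=bee0c731-72a8-497b-84f7-4425e7deee11
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 4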
00:10:46.074 21:50:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:10:47.972 21:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:47.972 21:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.972 21:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:47.972 21:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:10:47.972 21:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.972 21:50:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:10:47.972 21:50:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:47.972 [global] 00:10:47.972 thread=1 00:10:47.972 invalidate=1 00:10:47.972 rw=write 00:10:47.972 time_based=1 00:10:47.972 runtime=1 00:10:47.972 ioengine=libaio 00:10:47.972 direct=1 00:10:47.972 bs=4096 00:10:47.972 iodepth=1 00:10:47.972 norandommap=0 00:10:47.972 numjobs=1 00:10:47.972 00:10:47.972 verify_dump=1 00:10:47.972 verify_backlog=512 00:10:47.972 verify_state_save=0 00:10:47.972 do_verify=1 00:10:47.972 verify=crc32c-intel 00:10:47.972 [job0] 00:10:47.972 filename=/dev/nvme0n1 00:10:47.972 [job1] 00:10:47.972 filename=/dev/nvme0n2 00:10:47.972 [job2] 00:10:47.972 filename=/dev/nvme0n3 00:10:47.972 [job3] 00:10:47.972 filename=/dev/nvme0n4 00:10:47.972 Could not set queue depth (nvme0n1) 00:10:47.972 Could not set queue depth (nvme0n2) 00:10:47.972 Could not set queue depth (nvme0n3) 00:10:47.972 Could not set queue depth (nvme0n4) 00:10:48.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.229 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.229 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.229 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.230 fio-3.35 00:10:48.230 Starting 4 threads 00:10:49.608 00:10:49.608 job0: (groupid=0, jobs=1): err= 0: pid=80583: Wed Jul 24 21:50:54 2024 00:10:49.608 read: IOPS=3021, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:10:49.608 slat (nsec): min=11764, max=43840, avg=14742.99, stdev=3466.75 00:10:49.608 clat (usec): min=143, max=218, avg=168.76, stdev=10.77 00:10:49.608 lat (usec): min=157, max=245, avg=183.50, stdev=12.02 00:10:49.608 clat percentiles (usec): 00:10:49.608 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:10:49.608 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:10:49.608 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 188], 00:10:49.608 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 210], 99.95th=[ 219], 00:10:49.608 | 99.99th=[ 219] 00:10:49.608 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:49.608 slat (nsec): min=13811, max=95349, avg=19982.77, stdev=4456.12 00:10:49.608 clat (usec): min=94, max=2072, avg=121.42, stdev=38.63 00:10:49.608 lat (usec): min=112, max=2090, avg=141.40, stdev=39.09 00:10:49.608 clat percentiles (usec): 00:10:49.608 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:10:49.608 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 123], 
00:10:49.608 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 141], 00:10:49.608 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 326], 99.95th=[ 586], 00:10:49.608 | 99.99th=[ 2073] 00:10:49.608 bw ( KiB/s): min=12288, max=12288, per=30.17%, avg=12288.00, stdev= 0.00, samples=1 00:10:49.608 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:49.608 lat (usec) : 100=0.75%, 250=99.16%, 500=0.05%, 750=0.02% 00:10:49.608 lat (msec) : 4=0.02% 00:10:49.608 cpu : usr=2.70%, sys=7.90%, ctx=6097, majf=0, minf=13 00:10:49.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.608 issued rwts: total=3025,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.608 job1: (groupid=0, jobs=1): err= 0: pid=80584: Wed Jul 24 21:50:54 2024 00:10:49.608 read: IOPS=1897, BW=7588KiB/s (7771kB/s)(7596KiB/1001msec) 00:10:49.608 slat (nsec): min=8462, max=37182, avg=10286.74, stdev=1948.20 00:10:49.608 clat (usec): min=225, max=1758, avg=269.60, stdev=39.28 00:10:49.608 lat (usec): min=240, max=1776, avg=279.88, stdev=39.50 00:10:49.608 clat percentiles (usec): 00:10:49.608 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 255], 00:10:49.608 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:10:49.608 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:10:49.608 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 693], 99.95th=[ 1762], 00:10:49.608 | 99.99th=[ 1762] 00:10:49.608 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:49.608 slat (usec): min=10, max=102, avg=16.51, stdev= 4.25 00:10:49.608 clat (usec): min=137, max=347, avg=209.69, stdev=16.46 00:10:49.608 lat (usec): min=170, max=379, avg=226.20, stdev=16.73 00:10:49.608 clat percentiles (usec): 00:10:49.608 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:10:49.608 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:49.608 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 237], 00:10:49.609 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 314], 00:10:49.609 | 99.99th=[ 347] 00:10:49.609 bw ( KiB/s): min= 8192, max= 8192, per=20.12%, avg=8192.00, stdev= 0.00, samples=1 00:10:49.609 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:49.609 lat (usec) : 250=56.22%, 500=43.73%, 750=0.03% 00:10:49.609 lat (msec) : 2=0.03% 00:10:49.609 cpu : usr=1.00%, sys=4.70%, ctx=3948, majf=0, minf=7 00:10:49.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.609 issued rwts: total=1899,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.609 job2: (groupid=0, jobs=1): err= 0: pid=80585: Wed Jul 24 21:50:54 2024 00:10:49.609 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:49.609 slat (nsec): min=12375, max=66839, avg=15500.38, stdev=4121.18 00:10:49.609 clat (usec): min=147, max=694, avg=181.24, stdev=24.60 00:10:49.609 lat (usec): min=163, max=709, avg=196.75, stdev=26.65 00:10:49.609 clat percentiles (usec): 00:10:49.609 | 1.00th=[ 157], 5.00th=[ 161], 
10.00th=[ 163], 20.00th=[ 167], 00:10:49.609 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:10:49.609 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 227], 00:10:49.609 | 99.00th=[ 251], 99.50th=[ 281], 99.90th=[ 433], 99.95th=[ 545], 00:10:49.609 | 99.99th=[ 693] 00:10:49.609 write: IOPS=3019, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:10:49.609 slat (usec): min=14, max=100, avg=22.94, stdev= 7.22 00:10:49.609 clat (usec): min=101, max=1222, avg=138.01, stdev=31.87 00:10:49.609 lat (usec): min=121, max=1242, avg=160.95, stdev=35.59 00:10:49.609 clat percentiles (usec): 00:10:49.609 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 121], 00:10:49.609 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 139], 00:10:49.609 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 178], 00:10:49.609 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 388], 99.95th=[ 717], 00:10:49.609 | 99.99th=[ 1221] 00:10:49.609 bw ( KiB/s): min=12288, max=12288, per=30.17%, avg=12288.00, stdev= 0.00, samples=1 00:10:49.609 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:49.609 lat (usec) : 250=99.32%, 500=0.61%, 750=0.05% 00:10:49.609 lat (msec) : 2=0.02% 00:10:49.609 cpu : usr=2.20%, sys=8.40%, ctx=5583, majf=0, minf=10 00:10:49.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.609 issued rwts: total=2560,3023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.609 job3: (groupid=0, jobs=1): err= 0: pid=80586: Wed Jul 24 21:50:54 2024 00:10:49.609 read: IOPS=1898, BW=7592KiB/s (7775kB/s)(7600KiB/1001msec) 00:10:49.609 slat (nsec): min=9109, max=45403, avg=14014.47, stdev=2322.98 00:10:49.609 clat (usec): min=183, max=1840, avg=265.33, stdev=40.54 00:10:49.609 lat (usec): min=206, max=1864, avg=279.34, stdev=40.78 00:10:49.609 clat percentiles (usec): 00:10:49.609 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:10:49.609 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:49.609 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 00:10:49.609 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 635], 99.95th=[ 1844], 00:10:49.609 | 99.99th=[ 1844] 00:10:49.609 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:49.609 slat (nsec): min=11107, max=76169, avg=19359.40, stdev=3901.00 00:10:49.609 clat (usec): min=117, max=332, avg=206.60, stdev=15.69 00:10:49.609 lat (usec): min=173, max=372, avg=225.96, stdev=16.16 00:10:49.609 clat percentiles (usec): 00:10:49.609 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:10:49.609 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:10:49.609 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 233], 00:10:49.609 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 289], 99.95th=[ 306], 00:10:49.609 | 99.99th=[ 334] 00:10:49.609 bw ( KiB/s): min= 8192, max= 8192, per=20.12%, avg=8192.00, stdev= 0.00, samples=1 00:10:49.609 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:49.609 lat (usec) : 250=59.35%, 500=40.58%, 750=0.05% 00:10:49.609 lat (msec) : 2=0.03% 00:10:49.609 cpu : usr=2.10%, sys=5.20%, ctx=3950, majf=0, minf=5 00:10:49.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:49.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.609 issued rwts: total=1900,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.609 00:10:49.609 Run status group 0 (all jobs): 00:10:49.609 READ: bw=36.6MiB/s (38.4MB/s), 7588KiB/s-11.8MiB/s (7771kB/s-12.4MB/s), io=36.7MiB (38.4MB), run=1001-1001msec 00:10:49.609 WRITE: bw=39.8MiB/s (41.7MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.8MiB (41.7MB), run=1001-1001msec 00:10:49.609 00:10:49.609 Disk stats (read/write): 00:10:49.609 nvme0n1: ios=2610/2749, merge=0/0, ticks=475/350, in_queue=825, util=89.08% 00:10:49.609 nvme0n2: ios=1584/1915, merge=0/0, ticks=417/379, in_queue=796, util=89.69% 00:10:49.609 nvme0n3: ios=2233/2560, merge=0/0, ticks=423/384, in_queue=807, util=89.40% 00:10:49.609 nvme0n4: ios=1536/1915, merge=0/0, ticks=411/393, in_queue=804, util=89.66% 00:10:49.609 21:50:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:49.609 [global] 00:10:49.609 thread=1 00:10:49.609 invalidate=1 00:10:49.609 rw=randwrite 00:10:49.609 time_based=1 00:10:49.609 runtime=1 00:10:49.609 ioengine=libaio 00:10:49.609 direct=1 00:10:49.609 bs=4096 00:10:49.609 iodepth=1 00:10:49.609 norandommap=0 00:10:49.609 numjobs=1 00:10:49.609 00:10:49.609 verify_dump=1 00:10:49.609 verify_backlog=512 00:10:49.609 verify_state_save=0 00:10:49.609 do_verify=1 00:10:49.609 verify=crc32c-intel 00:10:49.609 [job0] 00:10:49.609 filename=/dev/nvme0n1 00:10:49.609 [job1] 00:10:49.609 filename=/dev/nvme0n2 00:10:49.609 [job2] 00:10:49.609 filename=/dev/nvme0n3 00:10:49.609 [job3] 00:10:49.609 filename=/dev/nvme0n4 00:10:49.609 Could not set queue depth (nvme0n1) 00:10:49.609 Could not set queue depth (nvme0n2) 00:10:49.609 Could not set queue depth (nvme0n3) 00:10:49.609 Could not set queue depth (nvme0n4) 00:10:49.609 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.609 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.609 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.609 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.609 fio-3.35 00:10:49.609 Starting 4 threads 00:10:50.983 00:10:50.983 job0: (groupid=0, jobs=1): err= 0: pid=80639: Wed Jul 24 21:50:56 2024 00:10:50.983 read: IOPS=2039, BW=8160KiB/s (8356kB/s)(8168KiB/1001msec) 00:10:50.983 slat (nsec): min=12233, max=46022, avg=14802.89, stdev=3150.73 00:10:50.983 clat (usec): min=216, max=361, avg=249.60, stdev=15.06 00:10:50.983 lat (usec): min=229, max=375, avg=264.40, stdev=16.07 00:10:50.983 clat percentiles (usec): 00:10:50.983 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:10:50.983 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:10:50.983 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:10:50.983 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 338], 99.95th=[ 347], 00:10:50.983 | 99.99th=[ 363] 00:10:50.983 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:50.983 slat (usec): min=12, max=114, avg=21.09, stdev= 4.71 00:10:50.983 clat 
(usec): min=129, max=1638, avg=200.07, stdev=39.64 00:10:50.983 lat (usec): min=160, max=1656, avg=221.17, stdev=40.09 00:10:50.983 clat percentiles (usec): 00:10:50.983 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:10:50.983 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:10:50.983 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 231], 00:10:50.983 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 494], 99.95th=[ 742], 00:10:50.983 | 99.99th=[ 1647] 00:10:50.983 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:50.983 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:50.983 lat (usec) : 250=78.24%, 500=21.71%, 750=0.02% 00:10:50.983 lat (msec) : 2=0.02% 00:10:50.983 cpu : usr=1.90%, sys=6.20%, ctx=4090, majf=0, minf=11 00:10:50.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.983 issued rwts: total=2042,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.983 job1: (groupid=0, jobs=1): err= 0: pid=80640: Wed Jul 24 21:50:56 2024 00:10:50.983 read: IOPS=1935, BW=7740KiB/s (7926kB/s)(7748KiB/1001msec) 00:10:50.983 slat (nsec): min=10746, max=31435, avg=13483.13, stdev=1913.54 00:10:50.983 clat (usec): min=218, max=959, avg=251.51, stdev=28.41 00:10:50.983 lat (usec): min=232, max=971, avg=265.00, stdev=28.31 00:10:50.983 clat percentiles (usec): 00:10:50.983 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:10:50.983 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:10:50.983 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:10:50.983 | 99.00th=[ 306], 99.50th=[ 355], 99.90th=[ 824], 99.95th=[ 963], 00:10:50.983 | 99.99th=[ 963] 00:10:50.983 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:50.983 slat (usec): min=11, max=188, avg=20.51, stdev= 8.48 00:10:50.983 clat (usec): min=124, max=615, avg=213.98, stdev=39.64 00:10:50.983 lat (usec): min=155, max=648, avg=234.49, stdev=44.46 00:10:50.983 clat percentiles (usec): 00:10:50.983 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:10:50.983 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:10:50.984 | 70.00th=[ 217], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 289], 00:10:50.984 | 99.00th=[ 318], 99.50th=[ 351], 99.90th=[ 502], 99.95th=[ 502], 00:10:50.984 | 99.99th=[ 619] 00:10:50.984 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:50.984 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:50.984 lat (usec) : 250=68.41%, 500=31.44%, 750=0.10%, 1000=0.05% 00:10:50.984 cpu : usr=1.60%, sys=5.70%, ctx=3986, majf=0, minf=11 00:10:50.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.984 issued rwts: total=1937,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.984 job2: (groupid=0, jobs=1): err= 0: pid=80641: Wed Jul 24 21:50:56 2024 00:10:50.984 read: IOPS=2040, BW=8164KiB/s (8360kB/s)(8172KiB/1001msec) 00:10:50.984 
slat (nsec): min=8642, max=40701, avg=10878.56, stdev=2332.05 00:10:50.984 clat (usec): min=177, max=360, avg=254.00, stdev=15.88 00:10:50.984 lat (usec): min=201, max=369, avg=264.88, stdev=16.43 00:10:50.984 clat percentiles (usec): 00:10:50.984 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:10:50.984 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:10:50.984 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 281], 00:10:50.984 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 355], 00:10:50.984 | 99.99th=[ 359] 00:10:50.984 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:50.984 slat (nsec): min=10571, max=70664, avg=16776.44, stdev=4760.51 00:10:50.984 clat (usec): min=155, max=1725, avg=204.55, stdev=41.13 00:10:50.984 lat (usec): min=179, max=1748, avg=221.32, stdev=41.61 00:10:50.984 clat percentiles (usec): 00:10:50.984 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:10:50.984 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 206], 00:10:50.984 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 235], 00:10:50.984 | 99.00th=[ 260], 99.50th=[ 289], 99.90th=[ 570], 99.95th=[ 660], 00:10:50.984 | 99.99th=[ 1729] 00:10:50.984 bw ( KiB/s): min= 8208, max= 8208, per=25.07%, avg=8208.00, stdev= 0.00, samples=1 00:10:50.984 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:10:50.984 lat (usec) : 250=71.77%, 500=28.13%, 750=0.07% 00:10:50.984 lat (msec) : 2=0.02% 00:10:50.984 cpu : usr=1.50%, sys=4.50%, ctx=4100, majf=0, minf=15 00:10:50.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.984 issued rwts: total=2043,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.984 job3: (groupid=0, jobs=1): err= 0: pid=80642: Wed Jul 24 21:50:56 2024 00:10:50.984 read: IOPS=1934, BW=7736KiB/s (7922kB/s)(7744KiB/1001msec) 00:10:50.984 slat (nsec): min=8968, max=46219, avg=12154.70, stdev=3544.20 00:10:50.984 clat (usec): min=177, max=939, avg=253.01, stdev=26.54 00:10:50.984 lat (usec): min=199, max=955, avg=265.16, stdev=27.26 00:10:50.984 clat percentiles (usec): 00:10:50.984 | 1.00th=[ 231], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:10:50.984 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:10:50.984 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 277], 00:10:50.984 | 99.00th=[ 306], 99.50th=[ 363], 99.90th=[ 725], 99.95th=[ 938], 00:10:50.984 | 99.99th=[ 938] 00:10:50.984 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:50.984 slat (usec): min=11, max=334, avg=23.95, stdev=11.58 00:10:50.984 clat (usec): min=4, max=601, avg=210.35, stdev=37.93 00:10:50.984 lat (usec): min=168, max=644, avg=234.31, stdev=44.85 00:10:50.984 clat percentiles (usec): 00:10:50.984 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:10:50.984 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:10:50.984 | 70.00th=[ 212], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 277], 00:10:50.984 | 99.00th=[ 306], 99.50th=[ 351], 99.90th=[ 482], 99.95th=[ 603], 00:10:50.984 | 99.99th=[ 603] 00:10:50.984 bw ( KiB/s): min= 8208, max= 8208, per=25.07%, avg=8208.00, stdev= 0.00, samples=1 00:10:50.984 iops : min= 2052, max= 
2052, avg=2052.00, stdev= 0.00, samples=1 00:10:50.984 lat (usec) : 10=0.03%, 250=66.27%, 500=33.61%, 750=0.08%, 1000=0.03% 00:10:50.984 cpu : usr=1.60%, sys=6.10%, ctx=3986, majf=0, minf=8 00:10:50.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.984 issued rwts: total=1936,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.984 00:10:50.984 Run status group 0 (all jobs): 00:10:50.984 READ: bw=31.1MiB/s (32.6MB/s), 7736KiB/s-8164KiB/s (7922kB/s-8360kB/s), io=31.1MiB (32.6MB), run=1001-1001msec 00:10:50.984 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:50.984 00:10:50.984 Disk stats (read/write): 00:10:50.984 nvme0n1: ios=1602/2048, merge=0/0, ticks=415/408, in_queue=823, util=89.08% 00:10:50.984 nvme0n2: ios=1585/1926, merge=0/0, ticks=413/379, in_queue=792, util=89.40% 00:10:50.984 nvme0n3: ios=1552/2048, merge=0/0, ticks=362/354, in_queue=716, util=89.24% 00:10:50.984 nvme0n4: ios=1536/1922, merge=0/0, ticks=370/409, in_queue=779, util=89.80% 00:10:50.984 21:50:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:50.984 [global] 00:10:50.984 thread=1 00:10:50.984 invalidate=1 00:10:50.984 rw=write 00:10:50.984 time_based=1 00:10:50.984 runtime=1 00:10:50.984 ioengine=libaio 00:10:50.984 direct=1 00:10:50.984 bs=4096 00:10:50.984 iodepth=128 00:10:50.984 norandommap=0 00:10:50.984 numjobs=1 00:10:50.984 00:10:50.984 verify_dump=1 00:10:50.984 verify_backlog=512 00:10:50.984 verify_state_save=0 00:10:50.984 do_verify=1 00:10:50.984 verify=crc32c-intel 00:10:50.984 [job0] 00:10:50.984 filename=/dev/nvme0n1 00:10:50.984 [job1] 00:10:50.984 filename=/dev/nvme0n2 00:10:50.984 [job2] 00:10:50.984 filename=/dev/nvme0n3 00:10:50.984 [job3] 00:10:50.984 filename=/dev/nvme0n4 00:10:50.984 Could not set queue depth (nvme0n1) 00:10:50.984 Could not set queue depth (nvme0n2) 00:10:50.984 Could not set queue depth (nvme0n3) 00:10:50.984 Could not set queue depth (nvme0n4) 00:10:50.984 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.984 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.984 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.984 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.984 fio-3.35 00:10:50.984 Starting 4 threads 00:10:52.357 00:10:52.357 job0: (groupid=0, jobs=1): err= 0: pid=80702: Wed Jul 24 21:50:57 2024 00:10:52.357 read: IOPS=5302, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1002msec) 00:10:52.357 slat (usec): min=5, max=4674, avg=89.05, stdev=420.25 00:10:52.357 clat (usec): min=320, max=14885, avg=11833.31, stdev=1235.78 00:10:52.357 lat (usec): min=2589, max=14913, avg=11922.36, stdev=1166.61 00:10:52.357 clat percentiles (usec): 00:10:52.357 | 1.00th=[ 5932], 5.00th=[11076], 10.00th=[11338], 20.00th=[11469], 00:10:52.357 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:10:52.357 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12911], 95.00th=[14222], 00:10:52.357 | 
99.00th=[14615], 99.50th=[14615], 99.90th=[14877], 99.95th=[14877], 00:10:52.357 | 99.99th=[14877] 00:10:52.357 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:52.357 slat (usec): min=10, max=2840, avg=86.40, stdev=360.02 00:10:52.357 clat (usec): min=8398, max=14536, avg=11324.73, stdev=801.03 00:10:52.357 lat (usec): min=8736, max=14759, avg=11411.13, stdev=720.94 00:10:52.357 clat percentiles (usec): 00:10:52.357 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[10683], 20.00th=[10945], 00:10:52.357 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:10:52.357 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[13435], 00:10:52.357 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:10:52.357 | 99.99th=[14484] 00:10:52.357 bw ( KiB/s): min=21768, max=23288, per=35.06%, avg=22528.00, stdev=1074.80, samples=2 00:10:52.357 iops : min= 5442, max= 5822, avg=5632.00, stdev=268.70, samples=2 00:10:52.357 lat (usec) : 500=0.01% 00:10:52.357 lat (msec) : 4=0.29%, 10=3.34%, 20=96.35% 00:10:52.357 cpu : usr=3.90%, sys=15.38%, ctx=343, majf=0, minf=19 00:10:52.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:52.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.357 issued rwts: total=5313,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.357 job1: (groupid=0, jobs=1): err= 0: pid=80703: Wed Jul 24 21:50:57 2024 00:10:52.357 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:10:52.357 slat (usec): min=6, max=6147, avg=189.14, stdev=969.46 00:10:52.357 clat (usec): min=18290, max=25911, avg=24785.68, stdev=1077.10 00:10:52.357 lat (usec): min=23224, max=25934, avg=24974.82, stdev=473.21 00:10:52.357 clat percentiles (usec): 00:10:52.357 | 1.00th=[19268], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:10:52.357 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25035], 00:10:52.357 | 70.00th=[25297], 80.00th=[25297], 90.00th=[25560], 95.00th=[25560], 00:10:52.357 | 99.00th=[25822], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:10:52.357 | 99.99th=[25822] 00:10:52.357 write: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1002msec); 0 zone resets 00:10:52.357 slat (usec): min=12, max=5969, avg=184.60, stdev=899.85 00:10:52.357 clat (usec): min=377, max=25297, avg=23335.47, stdev=2898.46 00:10:52.357 lat (usec): min=4002, max=25410, avg=23520.07, stdev=2760.41 00:10:52.357 clat percentiles (usec): 00:10:52.357 | 1.00th=[ 4555], 5.00th=[18744], 10.00th=[22938], 20.00th=[23462], 00:10:52.357 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:10:52.357 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:10:52.357 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:10:52.357 | 99.99th=[25297] 00:10:52.357 bw ( KiB/s): min=12040, max=12040, per=18.74%, avg=12040.00, stdev= 0.00, samples=1 00:10:52.357 iops : min= 3010, max= 3010, avg=3010.00, stdev= 0.00, samples=1 00:10:52.357 lat (usec) : 500=0.02% 00:10:52.357 lat (msec) : 10=0.69%, 20=4.10%, 50=95.20% 00:10:52.357 cpu : usr=2.30%, sys=7.69%, ctx=178, majf=0, minf=7 00:10:52.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:52.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.357 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.357 issued rwts: total=2560,2689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.357 job2: (groupid=0, jobs=1): err= 0: pid=80704: Wed Jul 24 21:50:57 2024 00:10:52.357 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:52.357 slat (usec): min=5, max=3252, avg=100.20, stdev=474.50 00:10:52.357 clat (usec): min=9658, max=14302, avg=13393.90, stdev=596.27 00:10:52.357 lat (usec): min=11869, max=14321, avg=13494.11, stdev=374.99 00:10:52.358 clat percentiles (usec): 00:10:52.358 | 1.00th=[10683], 5.00th=[12649], 10.00th=[12911], 20.00th=[13042], 00:10:52.358 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:10:52.358 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[13960], 00:10:52.358 | 99.00th=[14222], 99.50th=[14222], 99.90th=[14222], 99.95th=[14353], 00:10:52.358 | 99.99th=[14353] 00:10:52.358 write: IOPS=5102, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:52.358 slat (usec): min=12, max=3066, avg=97.23, stdev=410.25 00:10:52.358 clat (usec): min=1977, max=14280, avg=12673.63, stdev=1186.40 00:10:52.358 lat (usec): min=2001, max=14297, avg=12770.85, stdev=1116.20 00:10:52.358 clat percentiles (usec): 00:10:52.358 | 1.00th=[ 5473], 5.00th=[11207], 10.00th=[12256], 20.00th=[12387], 00:10:52.358 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:52.358 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:10:52.358 | 99.00th=[13698], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:10:52.358 | 99.99th=[14222] 00:10:52.358 bw ( KiB/s): min=19448, max=20521, per=31.10%, avg=19984.50, stdev=758.73, samples=2 00:10:52.358 iops : min= 4862, max= 5130, avg=4996.00, stdev=189.50, samples=2 00:10:52.358 lat (msec) : 2=0.01%, 4=0.30%, 10=0.75%, 20=98.94% 00:10:52.358 cpu : usr=4.99%, sys=13.77%, ctx=304, majf=0, minf=15 00:10:52.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:52.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.358 issued rwts: total=4608,5118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.358 job3: (groupid=0, jobs=1): err= 0: pid=80705: Wed Jul 24 21:50:57 2024 00:10:52.358 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:52.358 slat (usec): min=6, max=6009, avg=188.24, stdev=964.54 00:10:52.358 clat (usec): min=17909, max=26191, avg=24700.88, stdev=1108.98 00:10:52.358 lat (usec): min=23025, max=26222, avg=24889.12, stdev=553.05 00:10:52.358 clat percentiles (usec): 00:10:52.358 | 1.00th=[19268], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:10:52.358 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25035], 00:10:52.358 | 70.00th=[25297], 80.00th=[25297], 90.00th=[25560], 95.00th=[25560], 00:10:52.358 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:10:52.358 | 99.99th=[26084] 00:10:52.358 write: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1004msec); 0 zone resets 00:10:52.358 slat (usec): min=13, max=5760, avg=185.82, stdev=904.45 00:10:52.358 clat (usec): min=320, max=25760, avg=23412.05, stdev=2798.68 00:10:52.358 lat (usec): min=4631, max=25789, avg=23597.87, stdev=2651.23 00:10:52.358 clat percentiles (usec): 00:10:52.358 | 1.00th=[ 5145], 
5.00th=[18744], 10.00th=[22938], 20.00th=[23200], 00:10:52.358 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:10:52.358 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:10:52.358 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:10:52.358 | 99.99th=[25822] 00:10:52.358 bw ( KiB/s): min= 8448, max=12064, per=15.96%, avg=10256.00, stdev=2556.90, samples=2 00:10:52.358 iops : min= 2112, max= 3016, avg=2564.00, stdev=639.22, samples=2 00:10:52.358 lat (usec) : 500=0.02% 00:10:52.358 lat (msec) : 10=0.61%, 20=4.15%, 50=95.22% 00:10:52.358 cpu : usr=2.89%, sys=7.28%, ctx=185, majf=0, minf=9 00:10:52.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:52.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.358 issued rwts: total=2560,2689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.358 00:10:52.358 Run status group 0 (all jobs): 00:10:52.358 READ: bw=58.5MiB/s (61.4MB/s), 9.96MiB/s-20.7MiB/s (10.4MB/s-21.7MB/s), io=58.8MiB (61.6MB), run=1002-1004msec 00:10:52.358 WRITE: bw=62.7MiB/s (65.8MB/s), 10.5MiB/s-22.0MiB/s (11.0MB/s-23.0MB/s), io=63.0MiB (66.1MB), run=1002-1004msec 00:10:52.358 00:10:52.358 Disk stats (read/write): 00:10:52.358 nvme0n1: ios=4658/4608, merge=0/0, ticks=12329/11068, in_queue=23397, util=87.06% 00:10:52.358 nvme0n2: ios=2080/2368, merge=0/0, ticks=11515/12235, in_queue=23750, util=87.42% 00:10:52.358 nvme0n3: ios=4096/4128, merge=0/0, ticks=12417/11327, in_queue=23744, util=89.09% 00:10:52.358 nvme0n4: ios=2048/2368, merge=0/0, ticks=11210/12082, in_queue=23292, util=89.44% 00:10:52.358 21:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:52.358 [global] 00:10:52.358 thread=1 00:10:52.358 invalidate=1 00:10:52.358 rw=randwrite 00:10:52.358 time_based=1 00:10:52.358 runtime=1 00:10:52.358 ioengine=libaio 00:10:52.358 direct=1 00:10:52.358 bs=4096 00:10:52.358 iodepth=128 00:10:52.358 norandommap=0 00:10:52.358 numjobs=1 00:10:52.358 00:10:52.358 verify_dump=1 00:10:52.358 verify_backlog=512 00:10:52.358 verify_state_save=0 00:10:52.358 do_verify=1 00:10:52.358 verify=crc32c-intel 00:10:52.358 [job0] 00:10:52.358 filename=/dev/nvme0n1 00:10:52.358 [job1] 00:10:52.358 filename=/dev/nvme0n2 00:10:52.358 [job2] 00:10:52.358 filename=/dev/nvme0n3 00:10:52.358 [job3] 00:10:52.358 filename=/dev/nvme0n4 00:10:52.358 Could not set queue depth (nvme0n1) 00:10:52.358 Could not set queue depth (nvme0n2) 00:10:52.358 Could not set queue depth (nvme0n3) 00:10:52.358 Could not set queue depth (nvme0n4) 00:10:52.358 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.358 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.358 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.358 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.358 fio-3.35 00:10:52.358 Starting 4 threads 00:10:53.733 00:10:53.733 job0: (groupid=0, jobs=1): err= 0: pid=80763: Wed Jul 24 21:50:59 2024 00:10:53.733 read: IOPS=3056, BW=11.9MiB/s 
(12.5MB/s)(12.0MiB/1005msec) 00:10:53.733 slat (usec): min=12, max=20488, avg=162.89, stdev=1150.33 00:10:53.733 clat (usec): min=12095, max=39428, avg=22406.94, stdev=4962.05 00:10:53.733 lat (usec): min=12114, max=45674, avg=22569.83, stdev=5035.19 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[12649], 5.00th=[16319], 10.00th=[16581], 20.00th=[16909], 00:10:53.733 | 30.00th=[17433], 40.00th=[21103], 50.00th=[24249], 60.00th=[24773], 00:10:53.733 | 70.00th=[25035], 80.00th=[25822], 90.00th=[28443], 95.00th=[31065], 00:10:53.733 | 99.00th=[33817], 99.50th=[35390], 99.90th=[35914], 99.95th=[39060], 00:10:53.733 | 99.99th=[39584] 00:10:53.733 write: IOPS=3374, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1005msec); 0 zone resets 00:10:53.733 slat (usec): min=5, max=14288, avg=140.35, stdev=934.15 00:10:53.733 clat (usec): min=1077, max=33408, avg=17280.67, stdev=5422.54 00:10:53.733 lat (usec): min=5583, max=33448, avg=17421.01, stdev=5388.48 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[ 6390], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:10:53.733 | 30.00th=[12649], 40.00th=[13042], 50.00th=[16188], 60.00th=[18220], 00:10:53.733 | 70.00th=[22938], 80.00th=[23462], 90.00th=[23987], 95.00th=[24249], 00:10:53.733 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27657], 99.95th=[32637], 00:10:53.733 | 99.99th=[33424] 00:10:53.733 bw ( KiB/s): min=12280, max=13824, per=20.24%, avg=13052.00, stdev=1091.77, samples=2 00:10:53.733 iops : min= 3070, max= 3456, avg=3263.00, stdev=272.94, samples=2 00:10:53.733 lat (msec) : 2=0.02%, 10=1.41%, 20=49.17%, 50=49.40% 00:10:53.733 cpu : usr=2.79%, sys=10.16%, ctx=139, majf=0, minf=14 00:10:53.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:53.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.733 issued rwts: total=3072,3391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.733 job1: (groupid=0, jobs=1): err= 0: pid=80764: Wed Jul 24 21:50:59 2024 00:10:53.733 read: IOPS=5574, BW=21.8MiB/s (22.8MB/s)(21.9MiB/1005msec) 00:10:53.733 slat (usec): min=8, max=6126, avg=86.35, stdev=517.25 00:10:53.733 clat (usec): min=2216, max=19759, avg=11918.46, stdev=1390.61 00:10:53.733 lat (usec): min=6233, max=22500, avg=12004.81, stdev=1400.59 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[ 7373], 5.00th=[ 8848], 10.00th=[11076], 20.00th=[11600], 00:10:53.733 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:10:53.733 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12649], 95.00th=[12911], 00:10:53.733 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19792], 99.95th=[19792], 00:10:53.733 | 99.99th=[19792] 00:10:53.733 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:53.733 slat (usec): min=7, max=7724, avg=84.72, stdev=488.51 00:10:53.733 clat (usec): min=5645, max=14885, avg=10774.80, stdev=1017.92 00:10:53.733 lat (usec): min=7810, max=15106, avg=10859.53, stdev=923.47 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[ 7242], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:10:53.733 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:10:53.733 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:10:53.733 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14877], 99.95th=[14877], 00:10:53.733 | 99.99th=[14877] 
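The randwrite results around this point come from the fio-wrapper invocation traced at target/fio.sh@53 (-p nvmf -i 4096 -d 128 -t randwrite -r 1 -v). The wrapper's internals are not shown in this log, so the following is only a sketch under the assumption that it writes the [global]/[jobN] parameters echoed after that invocation into a job file and runs fio against the four connected namespaces; the /tmp path and the direct fio call are illustrative, not taken from the log.

# Hedged reconstruction of the job file echoed above; every option value is
# copied from the log, only the file location and the standalone invocation
# are assumptions.
cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-randwrite.fio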
00:10:53.733 bw ( KiB/s): min=21512, max=23544, per=34.93%, avg=22528.00, stdev=1436.84, samples=2 00:10:53.733 iops : min= 5378, max= 5886, avg=5632.00, stdev=359.21, samples=2 00:10:53.733 lat (msec) : 4=0.01%, 10=11.55%, 20=88.45% 00:10:53.733 cpu : usr=3.58%, sys=15.72%, ctx=289, majf=0, minf=11 00:10:53.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:53.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.733 issued rwts: total=5602,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.733 job2: (groupid=0, jobs=1): err= 0: pid=80765: Wed Jul 24 21:50:59 2024 00:10:53.733 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:10:53.733 slat (usec): min=6, max=12857, avg=205.65, stdev=1087.52 00:10:53.733 clat (usec): min=13811, max=66260, avg=27109.93, stdev=7784.42 00:10:53.733 lat (usec): min=13839, max=67952, avg=27315.58, stdev=7866.16 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[15926], 5.00th=[20579], 10.00th=[22938], 20.00th=[23987], 00:10:53.733 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:10:53.733 | 70.00th=[25560], 80.00th=[26870], 90.00th=[35914], 95.00th=[47973], 00:10:53.733 | 99.00th=[60556], 99.50th=[64226], 99.90th=[66323], 99.95th=[66323], 00:10:53.733 | 99.99th=[66323] 00:10:53.733 write: IOPS=2176, BW=8706KiB/s (8915kB/s)(8828KiB/1014msec); 0 zone resets 00:10:53.733 slat (usec): min=11, max=14972, avg=255.16, stdev=1283.04 00:10:53.733 clat (usec): min=9543, max=78119, avg=32600.31, stdev=15091.62 00:10:53.733 lat (usec): min=15304, max=78157, avg=32855.47, stdev=15185.89 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[18482], 5.00th=[20579], 10.00th=[22938], 20.00th=[23200], 00:10:53.733 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24773], 00:10:53.733 | 70.00th=[28443], 80.00th=[46400], 90.00th=[58983], 95.00th=[64226], 00:10:53.733 | 99.00th=[73925], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:10:53.733 | 99.99th=[78119] 00:10:53.733 bw ( KiB/s): min= 4856, max=11799, per=12.91%, avg=8327.50, stdev=4909.44, samples=2 00:10:53.733 iops : min= 1214, max= 2949, avg=2081.50, stdev=1226.83, samples=2 00:10:53.733 lat (msec) : 10=0.02%, 20=3.64%, 50=86.16%, 100=10.18% 00:10:53.733 cpu : usr=2.27%, sys=6.52%, ctx=194, majf=0, minf=15 00:10:53.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:53.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.733 issued rwts: total=2048,2207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.733 job3: (groupid=0, jobs=1): err= 0: pid=80766: Wed Jul 24 21:50:59 2024 00:10:53.733 read: IOPS=4657, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1003msec) 00:10:53.733 slat (usec): min=7, max=6850, avg=96.92, stdev=605.83 00:10:53.733 clat (usec): min=1468, max=21360, avg=13427.42, stdev=1641.05 00:10:53.733 lat (usec): min=6260, max=25446, avg=13524.33, stdev=1664.59 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[12649], 20.00th=[13042], 00:10:53.733 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13435], 60.00th=[13698], 00:10:53.733 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14353], 
95.00th=[14746], 00:10:53.733 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:10:53.733 | 99.99th=[21365] 00:10:53.733 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:53.733 slat (usec): min=10, max=12066, avg=99.57, stdev=589.69 00:10:53.733 clat (usec): min=6512, max=21885, avg=12577.72, stdev=1708.19 00:10:53.733 lat (usec): min=7271, max=21913, avg=12677.29, stdev=1631.68 00:10:53.733 clat percentiles (usec): 00:10:53.733 | 1.00th=[ 8225], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:10:53.733 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:10:53.733 | 70.00th=[12649], 80.00th=[13042], 90.00th=[15008], 95.00th=[15664], 00:10:53.733 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:10:53.733 | 99.99th=[21890] 00:10:53.733 bw ( KiB/s): min=19960, max=20521, per=31.38%, avg=20240.50, stdev=396.69, samples=2 00:10:53.733 iops : min= 4990, max= 5130, avg=5060.00, stdev=98.99, samples=2 00:10:53.733 lat (msec) : 2=0.01%, 10=3.83%, 20=94.86%, 50=1.30% 00:10:53.733 cpu : usr=4.99%, sys=13.37%, ctx=208, majf=0, minf=3 00:10:53.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:53.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.733 issued rwts: total=4671,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.733 00:10:53.733 Run status group 0 (all jobs): 00:10:53.733 READ: bw=59.3MiB/s (62.2MB/s), 8079KiB/s-21.8MiB/s (8273kB/s-22.8MB/s), io=60.1MiB (63.0MB), run=1003-1014msec 00:10:53.733 WRITE: bw=63.0MiB/s (66.0MB/s), 8706KiB/s-21.9MiB/s (8915kB/s-23.0MB/s), io=63.9MiB (67.0MB), run=1003-1014msec 00:10:53.733 00:10:53.733 Disk stats (read/write): 00:10:53.733 nvme0n1: ios=2610/2752, merge=0/0, ticks=55565/47112, in_queue=102677, util=87.86% 00:10:53.733 nvme0n2: ios=4657/4951, merge=0/0, ticks=51731/49212, in_queue=100943, util=88.96% 00:10:53.733 nvme0n3: ios=1599/2048, merge=0/0, ticks=20287/30740, in_queue=51027, util=88.92% 00:10:53.733 nvme0n4: ios=4088/4168, merge=0/0, ticks=52322/48459, in_queue=100781, util=89.68% 00:10:53.733 21:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:53.733 21:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=80779 00:10:53.733 21:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:53.733 21:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:53.733 [global] 00:10:53.733 thread=1 00:10:53.733 invalidate=1 00:10:53.733 rw=read 00:10:53.733 time_based=1 00:10:53.733 runtime=10 00:10:53.733 ioengine=libaio 00:10:53.733 direct=1 00:10:53.733 bs=4096 00:10:53.733 iodepth=1 00:10:53.733 norandommap=1 00:10:53.733 numjobs=1 00:10:53.733 00:10:53.733 [job0] 00:10:53.733 filename=/dev/nvme0n1 00:10:53.733 [job1] 00:10:53.733 filename=/dev/nvme0n2 00:10:53.733 [job2] 00:10:53.733 filename=/dev/nvme0n3 00:10:53.733 [job3] 00:10:53.733 filename=/dev/nvme0n4 00:10:53.733 Could not set queue depth (nvme0n1) 00:10:53.733 Could not set queue depth (nvme0n2) 00:10:53.733 Could not set queue depth (nvme0n3) 00:10:53.733 Could not set queue depth (nvme0n4) 00:10:53.733 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.733 job1: (g=0): rw=read, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.733 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.733 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.733 fio-3.35 00:10:53.733 Starting 4 threads 00:10:57.023 21:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:57.023 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=40304640, buflen=4096 00:10:57.023 fio: pid=80822, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.023 21:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:57.023 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=44548096, buflen=4096 00:10:57.023 fio: pid=80821, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.023 21:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.023 21:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:57.281 fio: pid=80819, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.281 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10747904, buflen=4096 00:10:57.281 21:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.281 21:51:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:57.538 fio: pid=80820, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:57.538 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13094912, buflen=4096 00:10:57.538 00:10:57.538 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80819: Wed Jul 24 21:51:03 2024 00:10:57.538 read: IOPS=5527, BW=21.6MiB/s (22.6MB/s)(74.2MiB/3439msec) 00:10:57.538 slat (usec): min=10, max=10051, avg=15.10, stdev=132.00 00:10:57.538 clat (usec): min=131, max=2001, avg=164.60, stdev=30.13 00:10:57.538 lat (usec): min=143, max=10276, avg=179.70, stdev=136.30 00:10:57.538 clat percentiles (usec): 00:10:57.538 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:10:57.538 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:10:57.538 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 188], 00:10:57.538 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 388], 99.95th=[ 562], 00:10:57.538 | 99.99th=[ 1860] 00:10:57.538 bw ( KiB/s): min=21093, max=23088, per=34.52%, avg=22159.50, stdev=646.05, samples=6 00:10:57.538 iops : min= 5273, max= 5772, avg=5539.83, stdev=161.60, samples=6 00:10:57.538 lat (usec) : 250=99.81%, 500=0.11%, 750=0.05%, 1000=0.01% 00:10:57.538 lat (msec) : 2=0.02%, 4=0.01% 00:10:57.538 cpu : usr=1.48%, sys=6.60%, ctx=19017, majf=0, minf=1 00:10:57.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 issued rwts: total=19009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.538 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:10:57.538 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80820: Wed Jul 24 21:51:03 2024 00:10:57.538 read: IOPS=5299, BW=20.7MiB/s (21.7MB/s)(76.5MiB/3695msec) 00:10:57.538 slat (usec): min=8, max=10837, avg=16.19, stdev=146.57 00:10:57.538 clat (usec): min=130, max=4181, avg=171.27, stdev=64.79 00:10:57.538 lat (usec): min=143, max=11026, avg=187.46, stdev=160.98 00:10:57.538 clat percentiles (usec): 00:10:57.538 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:57.538 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:10:57.538 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 249], 00:10:57.538 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 857], 99.95th=[ 1221], 00:10:57.538 | 99.99th=[ 3032] 00:10:57.538 bw ( KiB/s): min=17080, max=22952, per=33.10%, avg=21251.29, stdev=2369.63, samples=7 00:10:57.538 iops : min= 4270, max= 5738, avg=5312.71, stdev=592.49, samples=7 00:10:57.538 lat (usec) : 250=95.15%, 500=4.59%, 750=0.12%, 1000=0.06% 00:10:57.538 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01% 00:10:57.538 cpu : usr=1.49%, sys=6.17%, ctx=19595, majf=0, minf=1 00:10:57.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 issued rwts: total=19582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.538 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80821: Wed Jul 24 21:51:03 2024 00:10:57.538 read: IOPS=3408, BW=13.3MiB/s (14.0MB/s)(42.5MiB/3191msec) 00:10:57.538 slat (usec): min=8, max=11621, avg=14.91, stdev=131.97 00:10:57.538 clat (usec): min=153, max=2570, avg=277.08, stdev=49.13 00:10:57.538 lat (usec): min=165, max=11878, avg=291.99, stdev=141.51 00:10:57.538 clat percentiles (usec): 00:10:57.538 | 1.00th=[ 180], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 262], 00:10:57.538 | 30.00th=[ 269], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:10:57.538 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 318], 00:10:57.538 | 99.00th=[ 359], 99.50th=[ 482], 99.90th=[ 881], 99.95th=[ 1205], 00:10:57.538 | 99.99th=[ 1713] 00:10:57.538 bw ( KiB/s): min=12544, max=14056, per=21.14%, avg=13571.50, stdev=600.69, samples=6 00:10:57.538 iops : min= 3136, max= 3514, avg=3392.83, stdev=150.21, samples=6 00:10:57.538 lat (usec) : 250=7.86%, 500=91.69%, 750=0.30%, 1000=0.08% 00:10:57.538 lat (msec) : 2=0.05%, 4=0.01% 00:10:57.538 cpu : usr=1.25%, sys=3.98%, ctx=10882, majf=0, minf=1 00:10:57.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 issued rwts: total=10877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.538 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80822: Wed Jul 24 21:51:03 2024 00:10:57.538 read: IOPS=3374, BW=13.2MiB/s (13.8MB/s)(38.4MiB/2916msec) 00:10:57.538 slat (nsec): min=8079, max=80443, avg=12752.46, stdev=4509.15 00:10:57.538 clat (usec): min=188, max=7935, avg=282.21, 
stdev=114.34 00:10:57.538 lat (usec): min=203, max=7962, avg=294.96, stdev=114.77 00:10:57.538 clat percentiles (usec): 00:10:57.538 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:10:57.538 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:10:57.538 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 318], 00:10:57.538 | 99.00th=[ 367], 99.50th=[ 457], 99.90th=[ 685], 99.95th=[ 1020], 00:10:57.538 | 99.99th=[ 7963] 00:10:57.538 bw ( KiB/s): min=12790, max=14056, per=21.26%, avg=13647.60, stdev=497.06, samples=5 00:10:57.538 iops : min= 3197, max= 3514, avg=3411.80, stdev=124.48, samples=5 00:10:57.538 lat (usec) : 250=1.98%, 500=97.60%, 750=0.33%, 1000=0.03% 00:10:57.538 lat (msec) : 2=0.02%, 4=0.01%, 10=0.02% 00:10:57.538 cpu : usr=0.99%, sys=4.01%, ctx=9842, majf=0, minf=1 00:10:57.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.538 issued rwts: total=9841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.538 00:10:57.538 Run status group 0 (all jobs): 00:10:57.538 READ: bw=62.7MiB/s (65.7MB/s), 13.2MiB/s-21.6MiB/s (13.8MB/s-22.6MB/s), io=232MiB (243MB), run=2916-3695msec 00:10:57.538 00:10:57.538 Disk stats (read/write): 00:10:57.538 nvme0n1: ios=18595/0, merge=0/0, ticks=3081/0, in_queue=3081, util=95.45% 00:10:57.538 nvme0n2: ios=19132/0, merge=0/0, ticks=3318/0, in_queue=3318, util=95.58% 00:10:57.538 nvme0n3: ios=10609/0, merge=0/0, ticks=2889/0, in_queue=2889, util=96.27% 00:10:57.538 nvme0n4: ios=9711/0, merge=0/0, ticks=2652/0, in_queue=2652, util=96.56% 00:10:57.538 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.538 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:57.798 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.798 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:58.057 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.057 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:58.314 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.314 21:51:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:58.572 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.572 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:58.829 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:58.829 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 80779 00:10:58.829 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:58.829 21:51:04 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.086 nvmf hotplug test: fio failed as expected 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:59.086 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.344 rmmod nvme_tcp 00:10:59.344 rmmod nvme_fabrics 00:10:59.344 rmmod nvme_keyring 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 80398 ']' 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 80398 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 80398 ']' 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 80398 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80398 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:59.344 21:51:04 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80398' 00:10:59.344 killing process with pid 80398 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 80398 00:10:59.344 21:51:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 80398 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:59.602 00:10:59.602 real 0m19.306s 00:10:59.602 user 1m12.899s 00:10:59.602 sys 0m10.092s 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:59.602 21:51:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.602 ************************************ 00:10:59.602 END TEST nvmf_fio_target 00:10:59.602 ************************************ 00:10:59.602 21:51:05 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:59.602 21:51:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:59.603 21:51:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:59.603 21:51:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:59.603 ************************************ 00:10:59.603 START TEST nvmf_bdevio 00:10:59.603 ************************************ 00:10:59.603 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:59.860 * Looking for test storage... 
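The fio_target teardown traced just above (target/fio.sh@63 onward, then nvmftestfini) removes the bdevs, the subsystem, and the kernel initiator module before this bdevio run begins. Condensed into plain shell as a rough sketch — the RPC script path, bdev names, and NQN are copied verbatim from the trace, while the grouping and ordering are simplified:

# Hedged recap of the teardown commands visible in the trace above; not the
# test script itself.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# delete the RAID volumes first, then every malloc member bdev
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$m"
done

# detach the initiator, then drop the subsystem on the target side
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# remove fio verify-state files and unload the kernel initiator module
# (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above are its output)
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
modprobe -v -r nvme-tcp

In the run above the bdev deletions are issued while the background fio (pid 80779) is still reading, which is why the io_u Remote I/O errors and the 'nvmf hotplug test: fio failed as expected' message are the intended outcome of the hotplug check rather than a failure.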
00:10:59.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.860 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.861 21:51:05 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:59.861 Cannot find device "nvmf_tgt_br" 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.861 Cannot find device "nvmf_tgt_br2" 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:59.861 Cannot find device "nvmf_tgt_br" 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:59.861 Cannot find device "nvmf_tgt_br2" 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:59.861 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:00.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:11:00.119 00:11:00.119 --- 10.0.0.2 ping statistics --- 00:11:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.119 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:00.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:00.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:11:00.119 00:11:00.119 --- 10.0.0.3 ping statistics --- 00:11:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.119 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:00.119 00:11:00.119 --- 10.0.0.1 ping statistics --- 00:11:00.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.119 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=81093 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 81093 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 81093 ']' 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:00.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:00.119 21:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.119 [2024-07-24 21:51:05.734226] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:11:00.119 [2024-07-24 21:51:05.734307] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.433 [2024-07-24 21:51:05.868452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.433 [2024-07-24 21:51:05.972454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.433 [2024-07-24 21:51:05.972926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:00.433 [2024-07-24 21:51:05.973289] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.433 [2024-07-24 21:51:05.973724] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.433 [2024-07-24 21:51:05.974028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.433 [2024-07-24 21:51:05.974481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.433 [2024-07-24 21:51:05.974631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:00.433 [2024-07-24 21:51:05.974720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.433 [2024-07-24 21:51:05.974718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:00.433 [2024-07-24 21:51:06.030783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 [2024-07-24 21:51:06.769472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 Malloc0 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 [2024-07-24 21:51:06.845758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:01.365 { 00:11:01.365 "params": { 00:11:01.365 "name": "Nvme$subsystem", 00:11:01.365 "trtype": "$TEST_TRANSPORT", 00:11:01.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.365 "adrfam": "ipv4", 00:11:01.365 "trsvcid": "$NVMF_PORT", 00:11:01.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.365 "hdgst": ${hdgst:-false}, 00:11:01.365 "ddgst": ${ddgst:-false} 00:11:01.365 }, 00:11:01.365 "method": "bdev_nvme_attach_controller" 00:11:01.365 } 00:11:01.365 EOF 00:11:01.365 )") 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:01.365 21:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:01.365 "params": { 00:11:01.365 "name": "Nvme1", 00:11:01.365 "trtype": "tcp", 00:11:01.365 "traddr": "10.0.0.2", 00:11:01.365 "adrfam": "ipv4", 00:11:01.365 "trsvcid": "4420", 00:11:01.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.365 "hdgst": false, 00:11:01.365 "ddgst": false 00:11:01.365 }, 00:11:01.365 "method": "bdev_nvme_attach_controller" 00:11:01.365 }' 00:11:01.365 [2024-07-24 21:51:06.900484] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:11:01.365 [2024-07-24 21:51:06.900575] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81129 ] 00:11:01.365 [2024-07-24 21:51:07.042041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:01.622 [2024-07-24 21:51:07.149506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.622 [2024-07-24 21:51:07.149662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.622 [2024-07-24 21:51:07.149665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.622 [2024-07-24 21:51:07.218575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:01.622 I/O targets: 00:11:01.622 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:01.622 00:11:01.622 00:11:01.622 CUnit - A unit testing framework for C - Version 2.1-3 00:11:01.622 http://cunit.sourceforge.net/ 00:11:01.622 00:11:01.622 00:11:01.622 Suite: bdevio tests on: Nvme1n1 00:11:01.879 Test: blockdev write read block ...passed 00:11:01.879 Test: blockdev write zeroes read block ...passed 00:11:01.879 Test: blockdev write zeroes read no split ...passed 00:11:01.879 Test: blockdev write zeroes read split ...passed 00:11:01.879 Test: blockdev write zeroes read split partial ...passed 00:11:01.879 Test: blockdev reset ...[2024-07-24 21:51:07.365103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:01.879 [2024-07-24 21:51:07.365208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d23d0 (9): Bad file descriptor 00:11:01.879 passed 00:11:01.879 Test: blockdev write read 8 blocks ...[2024-07-24 21:51:07.382163] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:01.879 passed 00:11:01.879 Test: blockdev write read size > 128k ...passed 00:11:01.879 Test: blockdev write read invalid size ...passed 00:11:01.879 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:01.879 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:01.879 Test: blockdev write read max offset ...passed 00:11:01.879 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:01.879 Test: blockdev writev readv 8 blocks ...passed 00:11:01.879 Test: blockdev writev readv 30 x 1block ...passed 00:11:01.879 Test: blockdev writev readv block ...passed 00:11:01.879 Test: blockdev writev readv size > 128k ...passed 00:11:01.879 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:01.879 Test: blockdev comparev and writev ...[2024-07-24 21:51:07.389810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.389855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.389876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.389887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.390176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.390194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.390211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.390222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.390483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.390500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.390516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.390526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.390834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.390852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.390868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.879 [2024-07-24 21:51:07.390878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:11:01.879 passed 00:11:01.879 Test: blockdev nvme passthru rw ...passed 00:11:01.879 Test: blockdev nvme passthru vendor specific ...[2024-07-24 21:51:07.391718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.879 [2024-07-24 21:51:07.391746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.391862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.879 [2024-07-24 21:51:07.391883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:01.879 passed 00:11:01.879 Test: blockdev nvme admin passthru ...[2024-07-24 21:51:07.391986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.879 [2024-07-24 21:51:07.392009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:01.879 [2024-07-24 21:51:07.392104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.879 [2024-07-24 21:51:07.392121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:01.879 passed 00:11:01.879 Test: blockdev copy ...passed 00:11:01.879 00:11:01.879 Run Summary: Type Total Ran Passed Failed Inactive 00:11:01.879 suites 1 1 n/a 0 0 00:11:01.879 tests 23 23 23 0 0 00:11:01.879 asserts 152 152 152 0 n/a 00:11:01.879 00:11:01.879 Elapsed time = 0.149 seconds 00:11:01.879 21:51:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.879 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.880 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.137 rmmod nvme_tcp 00:11:02.137 rmmod nvme_fabrics 00:11:02.137 rmmod nvme_keyring 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 81093 ']' 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 81093 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 81093 ']' 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # kill -0 81093 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81093 00:11:02.137 killing process with pid 81093 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81093' 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 81093 00:11:02.137 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 81093 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.395 21:51:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:02.395 00:11:02.395 real 0m2.738s 00:11:02.395 user 0m9.210s 00:11:02.395 sys 0m0.767s 00:11:02.395 ************************************ 00:11:02.395 END TEST nvmf_bdevio 00:11:02.395 ************************************ 00:11:02.395 21:51:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:02.395 21:51:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.395 21:51:08 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:02.395 21:51:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:02.395 21:51:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.395 21:51:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.395 ************************************ 00:11:02.395 START TEST nvmf_auth_target 00:11:02.395 ************************************ 00:11:02.395 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:02.395 * Looking for test storage... 
00:11:02.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:02.654 Cannot find device "nvmf_tgt_br" 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.654 Cannot find device "nvmf_tgt_br2" 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:02.654 Cannot find device "nvmf_tgt_br" 00:11:02.654 
21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:02.654 Cannot find device "nvmf_tgt_br2" 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:02.654 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:02.655 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:02.655 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:02.655 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:02.655 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:02.913 21:51:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:02.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:11:02.913 00:11:02.913 --- 10.0.0.2 ping statistics --- 00:11:02.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.913 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:02.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:02.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:02.913 00:11:02.913 --- 10.0.0.3 ping statistics --- 00:11:02.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.913 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:02.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:02.913 00:11:02.913 --- 10.0.0.1 ping statistics --- 00:11:02.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.913 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=81307 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 81307 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 81307 ']' 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.913 21:51:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:02.913 21:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=81345 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:03.865 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=73833d648e729b747d08931d5a40f546cbf6cc63efa37350 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rhD 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 73833d648e729b747d08931d5a40f546cbf6cc63efa37350 0 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 73833d648e729b747d08931d5a40f546cbf6cc63efa37350 0 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=73833d648e729b747d08931d5a40f546cbf6cc63efa37350 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rhD 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rhD 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.rhD 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=962741be3dcb44d96126035d1dd6ec515516ad95ece70379b6ba5d9e54ad33dc 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GWb 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 962741be3dcb44d96126035d1dd6ec515516ad95ece70379b6ba5d9e54ad33dc 3 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 962741be3dcb44d96126035d1dd6ec515516ad95ece70379b6ba5d9e54ad33dc 3 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=962741be3dcb44d96126035d1dd6ec515516ad95ece70379b6ba5d9e54ad33dc 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GWb 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GWb 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.GWb 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ca8c9e244f2532fbe17effb25a859a40 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Mav 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ca8c9e244f2532fbe17effb25a859a40 1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ca8c9e244f2532fbe17effb25a859a40 1 
00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ca8c9e244f2532fbe17effb25a859a40 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Mav 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Mav 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Mav 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=66da28e678a293e97dff794b5f343063506e8ecf8076b2f1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.s3W 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 66da28e678a293e97dff794b5f343063506e8ecf8076b2f1 2 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 66da28e678a293e97dff794b5f343063506e8ecf8076b2f1 2 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=66da28e678a293e97dff794b5f343063506e8ecf8076b2f1 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:04.124 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:04.382 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.s3W 00:11:04.382 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.s3W 00:11:04.382 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.s3W 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:04.383 
21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b8858e9f3349d8780df7a8a02aa6ca6668855ced020e56d9 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nX1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b8858e9f3349d8780df7a8a02aa6ca6668855ced020e56d9 2 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b8858e9f3349d8780df7a8a02aa6ca6668855ced020e56d9 2 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b8858e9f3349d8780df7a8a02aa6ca6668855ced020e56d9 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nX1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nX1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.nX1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ac32c150e66f49e3a742aad0f86fbe96 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.P5L 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ac32c150e66f49e3a742aad0f86fbe96 1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ac32c150e66f49e3a742aad0f86fbe96 1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ac32c150e66f49e3a742aad0f86fbe96 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:04.383 21:51:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.P5L 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.P5L 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.P5L 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=334cba746e52df03fce2a2769a2ea9c4e1de19f0abb73079280c59b29b4cfdf6 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Aeu 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 334cba746e52df03fce2a2769a2ea9c4e1de19f0abb73079280c59b29b4cfdf6 3 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 334cba746e52df03fce2a2769a2ea9c4e1de19f0abb73079280c59b29b4cfdf6 3 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=334cba746e52df03fce2a2769a2ea9c4e1de19f0abb73079280c59b29b4cfdf6 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Aeu 00:11:04.383 21:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Aeu 00:11:04.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Aeu 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 81307 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 81307 ']' 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:04.641 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
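The trace up to this point is the key-provisioning phase: for each of keys[0..3] and the matching ckeys, the test draws random bytes, wraps them in a DHHC-1 secret, and stores the result in a 0600 temp file. The standalone sketch below reconstructs that flow; gen_dhchap_key and format_dhchap_key are the helper names visible in the trace, but their bodies here are an approximation, not a copy of nvmf/common.sh: the inline "python -" heredoc is replaced with a python3 -c one-liner, and the DHHC-1 encoding shown (base64 of the secret followed by its little-endian CRC-32) is assumed from the NVMe DH-HMAC-CHAP secret representation.

  # Sketch only; helper bodies are reconstructed, not taken verbatim from the SPDK tree.
  declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

  gen_dhchap_key() {   # gen_dhchap_key <digest name> <hex length>
      local digest=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex chars = len/2 random bytes, as in the xxd calls above
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"                                     # keys must not be world-readable
      echo "$file"
  }

  format_dhchap_key() {   # emits DHHC-1:<2-digit digest id>:<base64(secret + CRC-32(secret))>:  (assumed encoding)
      local key=$1 digest=$2
      python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()), end="")' "$key" "$digest"
  }

  # e.g. keys[1]=$(gen_dhchap_key sha256 32); ckeys[1]=$(gen_dhchap_key sha384 48), matching the trace above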
00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 81345 /var/tmp/host.sock 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 81345 ']' 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:04.899 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rhD 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rhD 00:11:05.156 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rhD 00:11:05.413 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.GWb ]] 00:11:05.413 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GWb 00:11:05.413 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.413 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.413 21:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.413 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GWb 00:11:05.413 21:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GWb 00:11:05.670 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:05.670 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Mav 00:11:05.670 21:51:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.670 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.670 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.670 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Mav 00:11:05.670 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Mav 00:11:05.927 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.s3W ]] 00:11:05.927 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.s3W 00:11:05.927 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.927 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.927 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.927 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.s3W 00:11:05.927 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.s3W 00:11:06.185 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:06.185 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nX1 00:11:06.185 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.185 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.185 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.185 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nX1 00:11:06.185 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nX1 00:11:06.444 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.P5L ]] 00:11:06.444 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P5L 00:11:06.444 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.444 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.444 21:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.444 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P5L 00:11:06.444 21:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P5L 00:11:06.701 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:06.701 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Aeu 00:11:06.701 21:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.701 21:51:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.701 21:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.701 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Aeu 00:11:06.701 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Aeu 00:11:06.970 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:06.970 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:06.970 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:06.970 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.970 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:06.970 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.244 21:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.501 00:11:07.501 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.501 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.501 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.772 { 00:11:07.772 "cntlid": 1, 00:11:07.772 "qid": 0, 00:11:07.772 "state": "enabled", 00:11:07.772 "listen_address": { 00:11:07.772 "trtype": "TCP", 00:11:07.772 "adrfam": "IPv4", 00:11:07.772 "traddr": "10.0.0.2", 00:11:07.772 "trsvcid": "4420" 00:11:07.772 }, 00:11:07.772 "peer_address": { 00:11:07.772 "trtype": "TCP", 00:11:07.772 "adrfam": "IPv4", 00:11:07.772 "traddr": "10.0.0.1", 00:11:07.772 "trsvcid": "36448" 00:11:07.772 }, 00:11:07.772 "auth": { 00:11:07.772 "state": "completed", 00:11:07.772 "digest": "sha256", 00:11:07.772 "dhgroup": "null" 00:11:07.772 } 00:11:07.772 } 00:11:07.772 ]' 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.772 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.029 21:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.286 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.286 21:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.545 { 00:11:13.545 "cntlid": 3, 00:11:13.545 "qid": 0, 00:11:13.545 "state": "enabled", 00:11:13.545 "listen_address": { 00:11:13.545 "trtype": "TCP", 00:11:13.545 "adrfam": "IPv4", 00:11:13.545 "traddr": "10.0.0.2", 00:11:13.545 "trsvcid": "4420" 00:11:13.545 }, 00:11:13.545 "peer_address": { 00:11:13.545 "trtype": "TCP", 00:11:13.545 "adrfam": "IPv4", 00:11:13.545 "traddr": "10.0.0.1", 00:11:13.545 "trsvcid": "48826" 00:11:13.545 }, 00:11:13.545 "auth": { 00:11:13.545 "state": "completed", 00:11:13.545 "digest": "sha256", 00:11:13.545 "dhgroup": "null" 00:11:13.545 } 00:11:13.545 } 00:11:13.545 ]' 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.545 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.803 21:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.736 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.302 00:11:15.302 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.302 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.302 21:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.302 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.302 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.302 21:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.302 21:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.302 21:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.560 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.560 { 00:11:15.560 "cntlid": 5, 00:11:15.560 "qid": 0, 00:11:15.560 "state": "enabled", 00:11:15.560 "listen_address": { 00:11:15.560 "trtype": "TCP", 00:11:15.560 "adrfam": "IPv4", 00:11:15.560 "traddr": "10.0.0.2", 00:11:15.560 "trsvcid": "4420" 00:11:15.560 }, 00:11:15.560 "peer_address": { 00:11:15.560 "trtype": "TCP", 00:11:15.560 "adrfam": "IPv4", 00:11:15.560 "traddr": "10.0.0.1", 00:11:15.560 "trsvcid": "48860" 00:11:15.560 }, 00:11:15.560 "auth": { 00:11:15.560 "state": "completed", 00:11:15.560 "digest": "sha256", 00:11:15.560 "dhgroup": "null" 00:11:15.560 } 00:11:15.560 } 00:11:15.560 ]' 00:11:15.560 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.560 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.560 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.560 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:15.560 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.560 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.561 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.561 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.819 21:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret 
DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:16.753 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.319 00:11:17.319 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.319 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.319 21:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.319 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.319 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
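From here the log repeats one authentication round per digest/dhgroup/key combination: the host bdev layer is pinned to the pair under test, the host NQN is added to the subsystem with the keyring entries registered above, a controller is attached and its qpair auth state is checked, and everything is torn down again (plus an nvme connect/disconnect against the same DHHC-1 secrets). Below is a condensed sketch of that round; the rpc.py path, sockets, NQNs and address are the ones in the log, but connect_round and targetrpc are hypothetical names standing in for the script's connect_authenticate and rpc_cmd, the optional ctrlr key handling is simplified, and the in-kernel nvme connect leg is omitted.

  # Condensed paraphrase of one round, under the assumptions stated above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11
  hostrpc()   { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme RPCs
  targetrpc() { "$rpc" -s /var/tmp/spdk.sock "$@"; }   # target-side nvmf RPCs

  connect_round() {   # connect_round <digest> <dhgroup> <keyid>
      local digest=$1 dhgroup=$2 keyid=$3

      # Pin the host to the digest/dhgroup pair under test
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Allow the host on the subsystem, bound to the keyring entries added earlier
      # (the real script skips --dhchap-ctrlr-key when ckey<N> is empty, e.g. for key3)
      targetrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

      # Attach and verify that DH-HMAC-CHAP completed on the resulting qpair
      hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      targetrpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

      # Tear down so the next digest/dhgroup/key combination starts clean
      hostrpc bdev_nvme_detach_controller nvme0
      targetrpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  }

  # The log iterates e.g. connect_round sha256 null 0..3, then sha256 ffdhe2048 0..3, and so on.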
00:11:17.319 21:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.319 21:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.577 { 00:11:17.577 "cntlid": 7, 00:11:17.577 "qid": 0, 00:11:17.577 "state": "enabled", 00:11:17.577 "listen_address": { 00:11:17.577 "trtype": "TCP", 00:11:17.577 "adrfam": "IPv4", 00:11:17.577 "traddr": "10.0.0.2", 00:11:17.577 "trsvcid": "4420" 00:11:17.577 }, 00:11:17.577 "peer_address": { 00:11:17.577 "trtype": "TCP", 00:11:17.577 "adrfam": "IPv4", 00:11:17.577 "traddr": "10.0.0.1", 00:11:17.577 "trsvcid": "48890" 00:11:17.577 }, 00:11:17.577 "auth": { 00:11:17.577 "state": "completed", 00:11:17.577 "digest": "sha256", 00:11:17.577 "dhgroup": "null" 00:11:17.577 } 00:11:17.577 } 00:11:17.577 ]' 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.577 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.834 21:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.769 
21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.769 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.336 00:11:19.336 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.336 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.336 21:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.336 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.336 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.336 21:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.336 21:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.595 { 00:11:19.595 "cntlid": 9, 00:11:19.595 "qid": 0, 00:11:19.595 "state": "enabled", 00:11:19.595 "listen_address": { 00:11:19.595 "trtype": "TCP", 00:11:19.595 "adrfam": "IPv4", 00:11:19.595 "traddr": "10.0.0.2", 00:11:19.595 "trsvcid": "4420" 00:11:19.595 }, 00:11:19.595 "peer_address": { 00:11:19.595 "trtype": "TCP", 00:11:19.595 "adrfam": "IPv4", 00:11:19.595 "traddr": "10.0.0.1", 00:11:19.595 "trsvcid": "48920" 00:11:19.595 }, 00:11:19.595 "auth": { 00:11:19.595 "state": "completed", 00:11:19.595 "digest": "sha256", 00:11:19.595 "dhgroup": "ffdhe2048" 00:11:19.595 } 00:11:19.595 } 00:11:19.595 ]' 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.595 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.854 21:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.791 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.359 00:11:21.359 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.359 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.359 21:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.617 { 00:11:21.617 "cntlid": 11, 00:11:21.617 "qid": 0, 00:11:21.617 "state": "enabled", 00:11:21.617 "listen_address": { 00:11:21.617 "trtype": "TCP", 00:11:21.617 "adrfam": "IPv4", 00:11:21.617 "traddr": "10.0.0.2", 00:11:21.617 "trsvcid": "4420" 00:11:21.617 }, 00:11:21.617 "peer_address": { 00:11:21.617 "trtype": "TCP", 00:11:21.617 "adrfam": "IPv4", 00:11:21.617 "traddr": "10.0.0.1", 00:11:21.617 "trsvcid": "48938" 00:11:21.617 }, 00:11:21.617 "auth": { 00:11:21.617 "state": "completed", 00:11:21.617 "digest": "sha256", 00:11:21.617 "dhgroup": "ffdhe2048" 00:11:21.617 } 00:11:21.617 } 00:11:21.617 ]' 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.617 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.618 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.618 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:21.618 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.618 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.618 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.618 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.185 21:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret 
DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:22.754 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.015 21:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.274 00:11:23.532 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.532 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.532 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.791 21:51:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.791 { 00:11:23.791 "cntlid": 13, 00:11:23.791 "qid": 0, 00:11:23.791 "state": "enabled", 00:11:23.791 "listen_address": { 00:11:23.791 "trtype": "TCP", 00:11:23.791 "adrfam": "IPv4", 00:11:23.791 "traddr": "10.0.0.2", 00:11:23.791 "trsvcid": "4420" 00:11:23.791 }, 00:11:23.791 "peer_address": { 00:11:23.791 "trtype": "TCP", 00:11:23.791 "adrfam": "IPv4", 00:11:23.791 "traddr": "10.0.0.1", 00:11:23.791 "trsvcid": "43610" 00:11:23.791 }, 00:11:23.791 "auth": { 00:11:23.791 "state": "completed", 00:11:23.791 "digest": "sha256", 00:11:23.791 "dhgroup": "ffdhe2048" 00:11:23.791 } 00:11:23.791 } 00:11:23.791 ]' 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.791 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.049 21:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:24.985 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.244 21:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.501 00:11:25.501 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.501 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.501 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.758 { 00:11:25.758 "cntlid": 15, 00:11:25.758 "qid": 0, 00:11:25.758 "state": "enabled", 00:11:25.758 "listen_address": { 00:11:25.758 "trtype": "TCP", 00:11:25.758 "adrfam": "IPv4", 00:11:25.758 "traddr": "10.0.0.2", 00:11:25.758 "trsvcid": "4420" 00:11:25.758 }, 00:11:25.758 "peer_address": { 00:11:25.758 "trtype": "TCP", 00:11:25.758 "adrfam": "IPv4", 00:11:25.758 "traddr": "10.0.0.1", 00:11:25.758 "trsvcid": "43634" 00:11:25.758 }, 00:11:25.758 "auth": { 00:11:25.758 "state": "completed", 00:11:25.758 "digest": "sha256", 00:11:25.758 "dhgroup": "ffdhe2048" 00:11:25.758 } 00:11:25.758 } 00:11:25.758 ]' 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.758 21:51:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.758 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.017 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:26.017 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.017 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.017 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.017 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.275 21:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.841 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:26.842 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.099 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:27.099 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.099 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:27.099 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.100 21:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.666 00:11:27.666 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.666 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.666 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.924 { 00:11:27.924 "cntlid": 17, 00:11:27.924 "qid": 0, 00:11:27.924 "state": "enabled", 00:11:27.924 "listen_address": { 00:11:27.924 "trtype": "TCP", 00:11:27.924 "adrfam": "IPv4", 00:11:27.924 "traddr": "10.0.0.2", 00:11:27.924 "trsvcid": "4420" 00:11:27.924 }, 00:11:27.924 "peer_address": { 00:11:27.924 "trtype": "TCP", 00:11:27.924 "adrfam": "IPv4", 00:11:27.924 "traddr": "10.0.0.1", 00:11:27.924 "trsvcid": "43666" 00:11:27.924 }, 00:11:27.924 "auth": { 00:11:27.924 "state": "completed", 00:11:27.924 "digest": "sha256", 00:11:27.924 "dhgroup": "ffdhe3072" 00:11:27.924 } 00:11:27.924 } 00:11:27.924 ]' 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.924 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.185 21:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret 
DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.136 21:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.394 00:11:29.394 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.394 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.394 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
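Each pass of the trace around this point repeats the same connect_authenticate cycle for one digest/dhgroup/key combination. The sketch below condenses one such pass using only the commands and flags visible in this trace; the subsystem NQN, host UUID, addresses and key names are copied from the log, while the DHHC-1 secrets are shortened placeholders (not the real test keys), the target-side RPC socket is not shown in the trace, and the sketch assumes the key0/ckey0 key material was registered with the host RPC server earlier in the test.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate pass as seen in this trace
# (digest=sha256, dhgroups ffdhe2048..ffdhe6144, keys key0..key3).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock                      # host-side SPDK app, per "hostrpc" in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11
key='DHHC-1:00:placeholder'                       # --dhchap-secret (placeholder value)
ckey='DHHC-1:03:placeholder'                      # --dhchap-ctrl-secret (placeholder value)

# 1. Host side: restrict the initiator to the digest/dhgroup under test.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Target side (rpc_cmd in the trace, default RPC socket assumed here):
#    allow the host on the subsystem with the key pair by name.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Host side: attach a controller, which forces DH-HMAC-CHAP during CONNECT.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify: controller is present and the target reports the qpair's auth
#    descriptor with the expected digest/dhgroup and state "completed".
"$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# 5. Tear down the bdev path, then repeat the handshake with nvme-cli.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid bee0c731-72a8-497b-84f7-4425e7deee11 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n "$subnqn"

# 6. Remove the host so the next digest/dhgroup/key combination starts clean.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq checks in step 4 are the actual assertions of each pass: the test compares .auth.digest, .auth.dhgroup and .auth.state from nvmf_subsystem_get_qpairs against the values it just configured, so a qpair that connected without completing DH-HMAC-CHAP would fail the comparison rather than pass silently.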
00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.959 { 00:11:29.959 "cntlid": 19, 00:11:29.959 "qid": 0, 00:11:29.959 "state": "enabled", 00:11:29.959 "listen_address": { 00:11:29.959 "trtype": "TCP", 00:11:29.959 "adrfam": "IPv4", 00:11:29.959 "traddr": "10.0.0.2", 00:11:29.959 "trsvcid": "4420" 00:11:29.959 }, 00:11:29.959 "peer_address": { 00:11:29.959 "trtype": "TCP", 00:11:29.959 "adrfam": "IPv4", 00:11:29.959 "traddr": "10.0.0.1", 00:11:29.959 "trsvcid": "43688" 00:11:29.959 }, 00:11:29.959 "auth": { 00:11:29.959 "state": "completed", 00:11:29.959 "digest": "sha256", 00:11:29.959 "dhgroup": "ffdhe3072" 00:11:29.959 } 00:11:29.959 } 00:11:29.959 ]' 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.959 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.216 21:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.150 
21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.150 21:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.716 00:11:31.716 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.716 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.716 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.974 { 00:11:31.974 "cntlid": 21, 00:11:31.974 "qid": 0, 00:11:31.974 "state": "enabled", 00:11:31.974 "listen_address": { 00:11:31.974 "trtype": "TCP", 00:11:31.974 "adrfam": "IPv4", 00:11:31.974 "traddr": "10.0.0.2", 00:11:31.974 "trsvcid": "4420" 00:11:31.974 }, 00:11:31.974 "peer_address": { 00:11:31.974 "trtype": "TCP", 00:11:31.974 "adrfam": "IPv4", 00:11:31.974 "traddr": "10.0.0.1", 00:11:31.974 "trsvcid": "59850" 00:11:31.974 }, 00:11:31.974 "auth": { 00:11:31.974 "state": "completed", 00:11:31.974 "digest": "sha256", 00:11:31.974 
"dhgroup": "ffdhe3072" 00:11:31.974 } 00:11:31.974 } 00:11:31.974 ]' 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.974 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.232 21:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:32.797 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.094 21:51:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.094 21:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.366 00:11:33.366 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.366 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.366 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.931 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.931 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.931 21:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.931 21:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.931 21:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.931 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.931 { 00:11:33.931 "cntlid": 23, 00:11:33.931 "qid": 0, 00:11:33.931 "state": "enabled", 00:11:33.931 "listen_address": { 00:11:33.931 "trtype": "TCP", 00:11:33.931 "adrfam": "IPv4", 00:11:33.931 "traddr": "10.0.0.2", 00:11:33.931 "trsvcid": "4420" 00:11:33.931 }, 00:11:33.931 "peer_address": { 00:11:33.931 "trtype": "TCP", 00:11:33.931 "adrfam": "IPv4", 00:11:33.931 "traddr": "10.0.0.1", 00:11:33.931 "trsvcid": "59884" 00:11:33.931 }, 00:11:33.931 "auth": { 00:11:33.931 "state": "completed", 00:11:33.932 "digest": "sha256", 00:11:33.932 "dhgroup": "ffdhe3072" 00:11:33.932 } 00:11:33.932 } 00:11:33.932 ]' 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.932 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.188 21:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid 
bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.120 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.121 21:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.685 00:11:35.685 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.685 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.685 21:51:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.942 { 00:11:35.942 "cntlid": 25, 00:11:35.942 "qid": 0, 00:11:35.942 "state": "enabled", 00:11:35.942 "listen_address": { 00:11:35.942 "trtype": "TCP", 00:11:35.942 "adrfam": "IPv4", 00:11:35.942 "traddr": "10.0.0.2", 00:11:35.942 "trsvcid": "4420" 00:11:35.942 }, 00:11:35.942 "peer_address": { 00:11:35.942 "trtype": "TCP", 00:11:35.942 "adrfam": "IPv4", 00:11:35.942 "traddr": "10.0.0.1", 00:11:35.942 "trsvcid": "59916" 00:11:35.942 }, 00:11:35.942 "auth": { 00:11:35.942 "state": "completed", 00:11:35.942 "digest": "sha256", 00:11:35.942 "dhgroup": "ffdhe4096" 00:11:35.942 } 00:11:35.942 } 00:11:35.942 ]' 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.942 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.200 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.200 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.200 21:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.458 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.025 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.282 21:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.848 00:11:37.848 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.848 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.848 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.107 { 00:11:38.107 "cntlid": 27, 00:11:38.107 "qid": 0, 00:11:38.107 "state": "enabled", 00:11:38.107 "listen_address": { 00:11:38.107 "trtype": "TCP", 00:11:38.107 "adrfam": "IPv4", 00:11:38.107 "traddr": "10.0.0.2", 00:11:38.107 "trsvcid": "4420" 00:11:38.107 }, 00:11:38.107 "peer_address": { 00:11:38.107 "trtype": "TCP", 00:11:38.107 "adrfam": "IPv4", 00:11:38.107 "traddr": "10.0.0.1", 00:11:38.107 "trsvcid": "59940" 
00:11:38.107 }, 00:11:38.107 "auth": { 00:11:38.107 "state": "completed", 00:11:38.107 "digest": "sha256", 00:11:38.107 "dhgroup": "ffdhe4096" 00:11:38.107 } 00:11:38.107 } 00:11:38.107 ]' 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.107 21:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.368 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.302 21:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.869 00:11:39.869 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.869 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.869 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.128 { 00:11:40.128 "cntlid": 29, 00:11:40.128 "qid": 0, 00:11:40.128 "state": "enabled", 00:11:40.128 "listen_address": { 00:11:40.128 "trtype": "TCP", 00:11:40.128 "adrfam": "IPv4", 00:11:40.128 "traddr": "10.0.0.2", 00:11:40.128 "trsvcid": "4420" 00:11:40.128 }, 00:11:40.128 "peer_address": { 00:11:40.128 "trtype": "TCP", 00:11:40.128 "adrfam": "IPv4", 00:11:40.128 "traddr": "10.0.0.1", 00:11:40.128 "trsvcid": "59972" 00:11:40.128 }, 00:11:40.128 "auth": { 00:11:40.128 "state": "completed", 00:11:40.128 "digest": "sha256", 00:11:40.128 "dhgroup": "ffdhe4096" 00:11:40.128 } 00:11:40.128 } 00:11:40.128 ]' 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.128 21:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.386 21:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.321 21:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.580 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.838 00:11:41.838 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.838 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.838 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.096 { 00:11:42.096 "cntlid": 31, 00:11:42.096 "qid": 0, 00:11:42.096 "state": "enabled", 00:11:42.096 "listen_address": { 00:11:42.096 "trtype": "TCP", 00:11:42.096 "adrfam": "IPv4", 00:11:42.096 "traddr": "10.0.0.2", 00:11:42.096 "trsvcid": "4420" 00:11:42.096 }, 00:11:42.096 "peer_address": { 00:11:42.096 "trtype": "TCP", 00:11:42.096 "adrfam": "IPv4", 00:11:42.096 "traddr": "10.0.0.1", 00:11:42.096 "trsvcid": "41436" 00:11:42.096 }, 00:11:42.096 "auth": { 00:11:42.096 "state": "completed", 00:11:42.096 "digest": "sha256", 00:11:42.096 "dhgroup": "ffdhe4096" 00:11:42.096 } 00:11:42.096 } 00:11:42.096 ]' 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.096 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.354 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.354 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.354 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.354 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.354 21:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.612 21:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.221 21:51:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.221 21:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.480 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.046 00:11:44.046 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.046 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.046 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.304 { 00:11:44.304 "cntlid": 33, 00:11:44.304 "qid": 0, 00:11:44.304 "state": "enabled", 00:11:44.304 "listen_address": { 00:11:44.304 "trtype": "TCP", 00:11:44.304 "adrfam": "IPv4", 00:11:44.304 "traddr": "10.0.0.2", 00:11:44.304 "trsvcid": "4420" 00:11:44.304 }, 00:11:44.304 "peer_address": { 00:11:44.304 "trtype": "TCP", 00:11:44.304 "adrfam": "IPv4", 00:11:44.304 "traddr": "10.0.0.1", 00:11:44.304 
"trsvcid": "41468" 00:11:44.304 }, 00:11:44.304 "auth": { 00:11:44.304 "state": "completed", 00:11:44.304 "digest": "sha256", 00:11:44.304 "dhgroup": "ffdhe6144" 00:11:44.304 } 00:11:44.304 } 00:11:44.304 ]' 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.304 21:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.562 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.562 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.562 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.562 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.495 21:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.495 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.060 00:11:46.060 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.060 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.060 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.318 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.318 21:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.318 21:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.318 21:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.318 21:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.318 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.318 { 00:11:46.318 "cntlid": 35, 00:11:46.318 "qid": 0, 00:11:46.318 "state": "enabled", 00:11:46.318 "listen_address": { 00:11:46.318 "trtype": "TCP", 00:11:46.318 "adrfam": "IPv4", 00:11:46.318 "traddr": "10.0.0.2", 00:11:46.318 "trsvcid": "4420" 00:11:46.318 }, 00:11:46.318 "peer_address": { 00:11:46.318 "trtype": "TCP", 00:11:46.318 "adrfam": "IPv4", 00:11:46.318 "traddr": "10.0.0.1", 00:11:46.318 "trsvcid": "41484" 00:11:46.318 }, 00:11:46.318 "auth": { 00:11:46.318 "state": "completed", 00:11:46.318 "digest": "sha256", 00:11:46.318 "dhgroup": "ffdhe6144" 00:11:46.318 } 00:11:46.318 } 00:11:46.318 ]' 00:11:46.318 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.576 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.576 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.576 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.576 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:46.576 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.576 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.576 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:11:46.833 21:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.400 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.657 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.221 00:11:48.221 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:11:48.221 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.221 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.479 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.479 21:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.479 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.479 21:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.479 { 00:11:48.479 "cntlid": 37, 00:11:48.479 "qid": 0, 00:11:48.479 "state": "enabled", 00:11:48.479 "listen_address": { 00:11:48.479 "trtype": "TCP", 00:11:48.479 "adrfam": "IPv4", 00:11:48.479 "traddr": "10.0.0.2", 00:11:48.479 "trsvcid": "4420" 00:11:48.479 }, 00:11:48.479 "peer_address": { 00:11:48.479 "trtype": "TCP", 00:11:48.479 "adrfam": "IPv4", 00:11:48.479 "traddr": "10.0.0.1", 00:11:48.479 "trsvcid": "41504" 00:11:48.479 }, 00:11:48.479 "auth": { 00:11:48.479 "state": "completed", 00:11:48.479 "digest": "sha256", 00:11:48.479 "dhgroup": "ffdhe6144" 00:11:48.479 } 00:11:48.479 } 00:11:48.479 ]' 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.479 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.736 21:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.668 21:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.926 21:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.926 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.926 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:50.183 00:11:50.183 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.183 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.183 21:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.440 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.440 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.440 21:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.440 21:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.440 21:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.440 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.440 { 00:11:50.440 "cntlid": 39, 00:11:50.440 "qid": 0, 00:11:50.440 "state": "enabled", 00:11:50.440 "listen_address": { 00:11:50.440 "trtype": "TCP", 00:11:50.440 "adrfam": "IPv4", 00:11:50.440 "traddr": "10.0.0.2", 00:11:50.440 "trsvcid": "4420" 00:11:50.440 }, 00:11:50.440 "peer_address": { 00:11:50.440 "trtype": "TCP", 00:11:50.440 "adrfam": 
"IPv4", 00:11:50.440 "traddr": "10.0.0.1", 00:11:50.440 "trsvcid": "41546" 00:11:50.440 }, 00:11:50.440 "auth": { 00:11:50.440 "state": "completed", 00:11:50.440 "digest": "sha256", 00:11:50.440 "dhgroup": "ffdhe6144" 00:11:50.440 } 00:11:50.440 } 00:11:50.440 ]' 00:11:50.440 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.697 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.697 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.697 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.697 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.697 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.697 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.697 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.955 21:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.519 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.777 21:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.344 00:11:52.601 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.601 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.601 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.863 { 00:11:52.863 "cntlid": 41, 00:11:52.863 "qid": 0, 00:11:52.863 "state": "enabled", 00:11:52.863 "listen_address": { 00:11:52.863 "trtype": "TCP", 00:11:52.863 "adrfam": "IPv4", 00:11:52.863 "traddr": "10.0.0.2", 00:11:52.863 "trsvcid": "4420" 00:11:52.863 }, 00:11:52.863 "peer_address": { 00:11:52.863 "trtype": "TCP", 00:11:52.863 "adrfam": "IPv4", 00:11:52.863 "traddr": "10.0.0.1", 00:11:52.863 "trsvcid": "42614" 00:11:52.863 }, 00:11:52.863 "auth": { 00:11:52.863 "state": "completed", 00:11:52.863 "digest": "sha256", 00:11:52.863 "dhgroup": "ffdhe8192" 00:11:52.863 } 00:11:52.863 } 00:11:52.863 ]' 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.863 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.120 21:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.053 21:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
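The cycle traced above repeats unchanged for every digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP offer with bdev_nvme_set_options, re-add the host NQN to the subsystem with the key pair under test, attach a controller (which forces the authentication exchange), read the negotiated parameters back out of nvmf_subsystem_get_qpairs, then detach. The following is a minimal standalone sketch of one such pass, not the script itself; it assumes a target already listening on 10.0.0.2:4420 and reachable on its default RPC socket, and that the key names key1/ckey1 were registered earlier in the run.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11

    # host side: offer only sha256 + ffdhe6144 during DH-HMAC-CHAP negotiation
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    # target side (default RPC socket assumed): allow this host with key1/ckey1
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attaching the controller triggers the authentication exchange
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # read back what was actually negotiated on the target's qpair, then tear down
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq checks in the trace compare exactly those three fields, expecting the configured digest and dhgroup and an auth state of "completed" before the controller is detached and the next key id is tried.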
00:11:54.619 00:11:54.877 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.877 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.877 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.135 { 00:11:55.135 "cntlid": 43, 00:11:55.135 "qid": 0, 00:11:55.135 "state": "enabled", 00:11:55.135 "listen_address": { 00:11:55.135 "trtype": "TCP", 00:11:55.135 "adrfam": "IPv4", 00:11:55.135 "traddr": "10.0.0.2", 00:11:55.135 "trsvcid": "4420" 00:11:55.135 }, 00:11:55.135 "peer_address": { 00:11:55.135 "trtype": "TCP", 00:11:55.135 "adrfam": "IPv4", 00:11:55.135 "traddr": "10.0.0.1", 00:11:55.135 "trsvcid": "42632" 00:11:55.135 }, 00:11:55.135 "auth": { 00:11:55.135 "state": "completed", 00:11:55.135 "digest": "sha256", 00:11:55.135 "dhgroup": "ffdhe8192" 00:11:55.135 } 00:11:55.135 } 00:11:55.135 ]' 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.135 21:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.446 21:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.382 21:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.382 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.947 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.204 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.204 { 00:11:57.204 "cntlid": 45, 00:11:57.204 "qid": 0, 00:11:57.204 "state": "enabled", 00:11:57.204 "listen_address": { 00:11:57.204 "trtype": "TCP", 00:11:57.204 "adrfam": 
"IPv4", 00:11:57.204 "traddr": "10.0.0.2", 00:11:57.204 "trsvcid": "4420" 00:11:57.204 }, 00:11:57.204 "peer_address": { 00:11:57.204 "trtype": "TCP", 00:11:57.204 "adrfam": "IPv4", 00:11:57.204 "traddr": "10.0.0.1", 00:11:57.204 "trsvcid": "42650" 00:11:57.204 }, 00:11:57.204 "auth": { 00:11:57.204 "state": "completed", 00:11:57.204 "digest": "sha256", 00:11:57.204 "dhgroup": "ffdhe8192" 00:11:57.204 } 00:11:57.204 } 00:11:57.204 ]' 00:11:57.462 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.462 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.462 21:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.462 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.462 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.462 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.462 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.462 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.720 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.284 21:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.541 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:58.542 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.106 00:11:59.106 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.106 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.106 21:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.363 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.363 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.363 21:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.363 21:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.363 21:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.363 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.363 { 00:11:59.363 "cntlid": 47, 00:11:59.363 "qid": 0, 00:11:59.363 "state": "enabled", 00:11:59.363 "listen_address": { 00:11:59.363 "trtype": "TCP", 00:11:59.363 "adrfam": "IPv4", 00:11:59.363 "traddr": "10.0.0.2", 00:11:59.363 "trsvcid": "4420" 00:11:59.363 }, 00:11:59.363 "peer_address": { 00:11:59.363 "trtype": "TCP", 00:11:59.363 "adrfam": "IPv4", 00:11:59.363 "traddr": "10.0.0.1", 00:11:59.363 "trsvcid": "42676" 00:11:59.363 }, 00:11:59.363 "auth": { 00:11:59.363 "state": "completed", 00:11:59.363 "digest": "sha256", 00:11:59.363 "dhgroup": "ffdhe8192" 00:11:59.363 } 00:11:59.363 } 00:11:59.363 ]' 00:11:59.363 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.621 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.621 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.621 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.621 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.621 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.621 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.621 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.879 21:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:00.444 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.713 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.984 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.242 00:12:01.242 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.242 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.242 21:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.500 { 00:12:01.500 "cntlid": 49, 00:12:01.500 "qid": 0, 00:12:01.500 "state": "enabled", 00:12:01.500 "listen_address": { 00:12:01.500 "trtype": "TCP", 00:12:01.500 "adrfam": "IPv4", 00:12:01.500 "traddr": "10.0.0.2", 00:12:01.500 "trsvcid": "4420" 00:12:01.500 }, 00:12:01.500 "peer_address": { 00:12:01.500 "trtype": "TCP", 00:12:01.500 "adrfam": "IPv4", 00:12:01.500 "traddr": "10.0.0.1", 00:12:01.500 "trsvcid": "42694" 00:12:01.500 }, 00:12:01.500 "auth": { 00:12:01.500 "state": "completed", 00:12:01.500 "digest": "sha384", 00:12:01.500 "dhgroup": "null" 00:12:01.500 } 00:12:01.500 } 00:12:01.500 ]' 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:01.500 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.757 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.757 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.757 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.014 21:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:12:02.578 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.578 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:02.578 21:52:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.578 21:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.578 21:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.578 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.578 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.578 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.141 21:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.142 21:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.142 21:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.142 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.142 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.399 00:12:03.399 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.399 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.399 21:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.656 { 00:12:03.656 "cntlid": 
51, 00:12:03.656 "qid": 0, 00:12:03.656 "state": "enabled", 00:12:03.656 "listen_address": { 00:12:03.656 "trtype": "TCP", 00:12:03.656 "adrfam": "IPv4", 00:12:03.656 "traddr": "10.0.0.2", 00:12:03.656 "trsvcid": "4420" 00:12:03.656 }, 00:12:03.656 "peer_address": { 00:12:03.656 "trtype": "TCP", 00:12:03.656 "adrfam": "IPv4", 00:12:03.656 "traddr": "10.0.0.1", 00:12:03.656 "trsvcid": "35604" 00:12:03.656 }, 00:12:03.656 "auth": { 00:12:03.656 "state": "completed", 00:12:03.656 "digest": "sha384", 00:12:03.656 "dhgroup": "null" 00:12:03.656 } 00:12:03.656 } 00:12:03.656 ]' 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:03.656 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.913 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.913 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.913 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.170 21:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:04.733 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 
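The optional controller key is assembled with the ${ckeys[$3]:+...} expansion visible at target/auth.sh@37, so an empty ckeys entry silently drops --dhchap-ctrlr-key; that is why every key3 pass in this trace adds the host and attaches the controller with --dhchap-key key3 alone, making that exchange unidirectional (the target verifies the host, but the host does not challenge the controller). A small illustration of the idiom with placeholder key material; real secrets are the DHHC-1:<id>:<base64>: strings visible in the nvme connect lines above.

    # hypothetical values for illustration only; index 3 deliberately left empty
    ckeys=("ckey0-material" "ckey1-material" "ckey2-material" "")
    for keyid in "${!ckeys[@]}"; do
        # expands to "--dhchap-ctrlr-key ckeyN" only when ckeys[N] is non-empty
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> --dhchap-key key$keyid ${ckey[*]}"
    done
    # key3 prints no --dhchap-ctrlr-key, matching the key3 iterations in this trace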
00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.991 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.249 00:12:05.249 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.249 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.249 21:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.507 { 00:12:05.507 "cntlid": 53, 00:12:05.507 "qid": 0, 00:12:05.507 "state": "enabled", 00:12:05.507 "listen_address": { 00:12:05.507 "trtype": "TCP", 00:12:05.507 "adrfam": "IPv4", 00:12:05.507 "traddr": "10.0.0.2", 00:12:05.507 "trsvcid": "4420" 00:12:05.507 }, 00:12:05.507 "peer_address": { 00:12:05.507 "trtype": "TCP", 00:12:05.507 "adrfam": "IPv4", 00:12:05.507 "traddr": "10.0.0.1", 00:12:05.507 "trsvcid": "35640" 00:12:05.507 }, 00:12:05.507 "auth": { 00:12:05.507 "state": "completed", 00:12:05.507 "digest": "sha384", 00:12:05.507 "dhgroup": "null" 00:12:05.507 } 00:12:05.507 } 00:12:05.507 ]' 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.507 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.765 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:05.765 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.765 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.765 21:52:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.765 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.023 21:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:06.619 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:06.878 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:07.136 00:12:07.394 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.394 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.394 21:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.651 { 00:12:07.651 "cntlid": 55, 00:12:07.651 "qid": 0, 00:12:07.651 "state": "enabled", 00:12:07.651 "listen_address": { 00:12:07.651 "trtype": "TCP", 00:12:07.651 "adrfam": "IPv4", 00:12:07.651 "traddr": "10.0.0.2", 00:12:07.651 "trsvcid": "4420" 00:12:07.651 }, 00:12:07.651 "peer_address": { 00:12:07.651 "trtype": "TCP", 00:12:07.651 "adrfam": "IPv4", 00:12:07.651 "traddr": "10.0.0.1", 00:12:07.651 "trsvcid": "35660" 00:12:07.651 }, 00:12:07.651 "auth": { 00:12:07.651 "state": "completed", 00:12:07.651 "digest": "sha384", 00:12:07.651 "dhgroup": "null" 00:12:07.651 } 00:12:07.651 } 00:12:07.651 ]' 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.651 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.910 21:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.843 21:52:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:08.843 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.102 21:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.360 00:12:09.360 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.360 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.360 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.618 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.618 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.618 21:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.618 21:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.618 21:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.618 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.618 { 00:12:09.618 "cntlid": 57, 00:12:09.618 "qid": 0, 00:12:09.618 "state": "enabled", 
00:12:09.618 "listen_address": { 00:12:09.618 "trtype": "TCP", 00:12:09.618 "adrfam": "IPv4", 00:12:09.618 "traddr": "10.0.0.2", 00:12:09.618 "trsvcid": "4420" 00:12:09.618 }, 00:12:09.618 "peer_address": { 00:12:09.618 "trtype": "TCP", 00:12:09.618 "adrfam": "IPv4", 00:12:09.618 "traddr": "10.0.0.1", 00:12:09.618 "trsvcid": "35698" 00:12:09.618 }, 00:12:09.618 "auth": { 00:12:09.618 "state": "completed", 00:12:09.618 "digest": "sha384", 00:12:09.618 "dhgroup": "ffdhe2048" 00:12:09.618 } 00:12:09.618 } 00:12:09.618 ]' 00:12:09.618 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.876 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.876 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.876 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:09.876 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.877 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.877 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.877 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.135 21:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key1 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.070 21:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.327 00:12:11.585 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.585 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.585 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.585 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.585 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.585 21:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.585 21:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.843 21:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.843 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.843 { 00:12:11.843 "cntlid": 59, 00:12:11.843 "qid": 0, 00:12:11.843 "state": "enabled", 00:12:11.843 "listen_address": { 00:12:11.843 "trtype": "TCP", 00:12:11.843 "adrfam": "IPv4", 00:12:11.843 "traddr": "10.0.0.2", 00:12:11.843 "trsvcid": "4420" 00:12:11.843 }, 00:12:11.843 "peer_address": { 00:12:11.844 "trtype": "TCP", 00:12:11.844 "adrfam": "IPv4", 00:12:11.844 "traddr": "10.0.0.1", 00:12:11.844 "trsvcid": "42106" 00:12:11.844 }, 00:12:11.844 "auth": { 00:12:11.844 "state": "completed", 00:12:11.844 "digest": "sha384", 00:12:11.844 "dhgroup": "ffdhe2048" 00:12:11.844 } 00:12:11.844 } 00:12:11.844 ]' 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.844 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.102 21:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.669 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.236 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.495 00:12:13.495 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.495 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.495 21:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.753 { 00:12:13.753 "cntlid": 61, 00:12:13.753 "qid": 0, 00:12:13.753 "state": "enabled", 00:12:13.753 "listen_address": { 00:12:13.753 "trtype": "TCP", 00:12:13.753 "adrfam": "IPv4", 00:12:13.753 "traddr": "10.0.0.2", 00:12:13.753 "trsvcid": "4420" 00:12:13.753 }, 00:12:13.753 "peer_address": { 00:12:13.753 "trtype": "TCP", 00:12:13.753 "adrfam": "IPv4", 00:12:13.753 "traddr": "10.0.0.1", 00:12:13.753 "trsvcid": "42134" 00:12:13.753 }, 00:12:13.753 "auth": { 00:12:13.753 "state": "completed", 00:12:13.753 "digest": "sha384", 00:12:13.753 "dhgroup": "ffdhe2048" 00:12:13.753 } 00:12:13.753 } 00:12:13.753 ]' 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.753 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.011 21:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:14.946 
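Each round in this trace drives both sides of DH-HMAC-CHAP: the target is told which host NQN may connect and which key pair to expect, and the host-side bdev_nvme module (served on /var/tmp/host.sock) then attaches a controller that must authenticate with those keys before it is usable. A condensed sketch of one such round, built only from the commands visible above; key2/ckey2 stand for whichever key pair the current iteration uses, and rpc_cmd is assumed to be the harness wrapper that sends the same RPC to the nvmf target application:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # host-side RPC client, socket from the trace
sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11

# target side: allow the host NQN and pin the key pair it must present
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# host side: attach a controller, which has to complete DH-HMAC-CHAP on connect
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" -s "$sock" bdev_nvme_get_controllers        # expect a controller named nvme0

# tear down before the next key/dhgroup combination
"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"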
21:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.946 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.513 00:12:15.513 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.513 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.513 21:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.771 { 00:12:15.771 "cntlid": 63, 00:12:15.771 "qid": 0, 00:12:15.771 "state": 
"enabled", 00:12:15.771 "listen_address": { 00:12:15.771 "trtype": "TCP", 00:12:15.771 "adrfam": "IPv4", 00:12:15.771 "traddr": "10.0.0.2", 00:12:15.771 "trsvcid": "4420" 00:12:15.771 }, 00:12:15.771 "peer_address": { 00:12:15.771 "trtype": "TCP", 00:12:15.771 "adrfam": "IPv4", 00:12:15.771 "traddr": "10.0.0.1", 00:12:15.771 "trsvcid": "42168" 00:12:15.771 }, 00:12:15.771 "auth": { 00:12:15.771 "state": "completed", 00:12:15.771 "digest": "sha384", 00:12:15.771 "dhgroup": "ffdhe2048" 00:12:15.771 } 00:12:15.771 } 00:12:15.771 ]' 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.771 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.030 21:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.967 21:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.226 21:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.226 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.226 21:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.484 00:12:17.484 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.484 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.484 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.742 { 00:12:17.742 "cntlid": 65, 00:12:17.742 "qid": 0, 00:12:17.742 "state": "enabled", 00:12:17.742 "listen_address": { 00:12:17.742 "trtype": "TCP", 00:12:17.742 "adrfam": "IPv4", 00:12:17.742 "traddr": "10.0.0.2", 00:12:17.742 "trsvcid": "4420" 00:12:17.742 }, 00:12:17.742 "peer_address": { 00:12:17.742 "trtype": "TCP", 00:12:17.742 "adrfam": "IPv4", 00:12:17.742 "traddr": "10.0.0.1", 00:12:17.742 "trsvcid": "42194" 00:12:17.742 }, 00:12:17.742 "auth": { 00:12:17.742 "state": "completed", 00:12:17.742 "digest": "sha384", 00:12:17.742 "dhgroup": "ffdhe3072" 00:12:17.742 } 00:12:17.742 } 00:12:17.742 ]' 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.742 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.000 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.000 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.000 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.000 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.000 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.259 21:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:18.827 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.085 21:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.343 00:12:19.343 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.343 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.344 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.601 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.601 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.601 21:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.601 21:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.601 21:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.601 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.601 { 00:12:19.601 "cntlid": 67, 00:12:19.601 "qid": 0, 00:12:19.601 "state": "enabled", 00:12:19.601 "listen_address": { 00:12:19.601 "trtype": "TCP", 00:12:19.601 "adrfam": "IPv4", 00:12:19.601 "traddr": "10.0.0.2", 00:12:19.601 "trsvcid": "4420" 00:12:19.601 }, 00:12:19.602 "peer_address": { 00:12:19.602 "trtype": "TCP", 00:12:19.602 "adrfam": "IPv4", 00:12:19.602 "traddr": "10.0.0.1", 00:12:19.602 "trsvcid": "42210" 00:12:19.602 }, 00:12:19.602 "auth": { 00:12:19.602 "state": "completed", 00:12:19.602 "digest": "sha384", 00:12:19.602 "dhgroup": "ffdhe3072" 00:12:19.602 } 00:12:19.602 } 00:12:19.602 ]' 00:12:19.602 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.860 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.860 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.860 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:19.860 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.860 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.860 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.860 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.118 21:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:20.727 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.986 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.243 00:12:21.243 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.243 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.243 21:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.808 
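After each attach the test asks the target for the qpairs on the subsystem and inspects the .auth object they report: the digest and dhgroup must match what was just configured and the state must read "completed", exactly as in the JSON block that follows. A sketch of that verification using the same jq filters seen in the trace (rpc_cmd again assumed to be the harness wrapper for the target's RPC socket; ffdhe3072 is the group under test in this round):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished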
21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.808 { 00:12:21.808 "cntlid": 69, 00:12:21.808 "qid": 0, 00:12:21.808 "state": "enabled", 00:12:21.808 "listen_address": { 00:12:21.808 "trtype": "TCP", 00:12:21.808 "adrfam": "IPv4", 00:12:21.808 "traddr": "10.0.0.2", 00:12:21.808 "trsvcid": "4420" 00:12:21.808 }, 00:12:21.808 "peer_address": { 00:12:21.808 "trtype": "TCP", 00:12:21.808 "adrfam": "IPv4", 00:12:21.808 "traddr": "10.0.0.1", 00:12:21.808 "trsvcid": "42228" 00:12:21.808 }, 00:12:21.808 "auth": { 00:12:21.808 "state": "completed", 00:12:21.808 "digest": "sha384", 00:12:21.808 "dhgroup": "ffdhe3072" 00:12:21.808 } 00:12:21.808 } 00:12:21.808 ]' 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.808 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.066 21:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:22.633 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:22.891 21:52:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.891 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.457 00:12:23.457 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.457 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.457 21:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.715 { 00:12:23.715 "cntlid": 71, 00:12:23.715 "qid": 0, 00:12:23.715 "state": "enabled", 00:12:23.715 "listen_address": { 00:12:23.715 "trtype": "TCP", 00:12:23.715 "adrfam": "IPv4", 00:12:23.715 "traddr": "10.0.0.2", 00:12:23.715 "trsvcid": "4420" 00:12:23.715 }, 00:12:23.715 "peer_address": { 00:12:23.715 "trtype": "TCP", 00:12:23.715 "adrfam": "IPv4", 00:12:23.715 "traddr": "10.0.0.1", 00:12:23.715 "trsvcid": "49406" 00:12:23.715 }, 00:12:23.715 "auth": { 00:12:23.715 "state": "completed", 00:12:23.715 "digest": "sha384", 00:12:23.715 "dhgroup": "ffdhe3072" 00:12:23.715 } 00:12:23.715 } 00:12:23.715 ]' 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.715 21:52:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.715 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.298 21:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:24.864 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.122 21:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.380 00:12:25.381 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.381 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.381 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.639 { 00:12:25.639 "cntlid": 73, 00:12:25.639 "qid": 0, 00:12:25.639 "state": "enabled", 00:12:25.639 "listen_address": { 00:12:25.639 "trtype": "TCP", 00:12:25.639 "adrfam": "IPv4", 00:12:25.639 "traddr": "10.0.0.2", 00:12:25.639 "trsvcid": "4420" 00:12:25.639 }, 00:12:25.639 "peer_address": { 00:12:25.639 "trtype": "TCP", 00:12:25.639 "adrfam": "IPv4", 00:12:25.639 "traddr": "10.0.0.1", 00:12:25.639 "trsvcid": "49424" 00:12:25.639 }, 00:12:25.639 "auth": { 00:12:25.639 "state": "completed", 00:12:25.639 "digest": "sha384", 00:12:25.639 "dhgroup": "ffdhe4096" 00:12:25.639 } 00:12:25.639 } 00:12:25.639 ]' 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.639 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:25.897 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.897 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.897 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.897 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.156 21:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:26.722 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.980 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.237 00:12:27.237 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.237 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.237 21:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.803 21:52:33 
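The kernel-initiator half of each round is exercised with nvme-cli: the host connects with explicit DHHC-1 host and controller secrets and is then disconnected by subsystem NQN, producing the "disconnected 1 controller(s)" lines seen throughout. A sketch with the secrets replaced by placeholders (the real DHHC-1 strings appear inline in the trace; -i 1 requests a single I/O queue):

host_key='<DHHC-1 host secret from the trace>'        # placeholder, not a real key
ctrl_key='<DHHC-1 controller secret from the trace>'  # placeholder, not a real key

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 \
    --hostid bee0c731-72a8-497b-84f7-4425e7deee11 \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0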
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.803 { 00:12:27.803 "cntlid": 75, 00:12:27.803 "qid": 0, 00:12:27.803 "state": "enabled", 00:12:27.803 "listen_address": { 00:12:27.803 "trtype": "TCP", 00:12:27.803 "adrfam": "IPv4", 00:12:27.803 "traddr": "10.0.0.2", 00:12:27.803 "trsvcid": "4420" 00:12:27.803 }, 00:12:27.803 "peer_address": { 00:12:27.803 "trtype": "TCP", 00:12:27.803 "adrfam": "IPv4", 00:12:27.803 "traddr": "10.0.0.1", 00:12:27.803 "trsvcid": "49464" 00:12:27.803 }, 00:12:27.803 "auth": { 00:12:27.803 "state": "completed", 00:12:27.803 "digest": "sha384", 00:12:27.803 "dhgroup": "ffdhe4096" 00:12:27.803 } 00:12:27.803 } 00:12:27.803 ]' 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.803 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.067 21:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.645 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.212 21:52:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.212 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.470 00:12:29.470 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.470 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.470 21:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.729 { 00:12:29.729 "cntlid": 77, 00:12:29.729 "qid": 0, 00:12:29.729 "state": "enabled", 00:12:29.729 "listen_address": { 00:12:29.729 "trtype": "TCP", 00:12:29.729 "adrfam": "IPv4", 00:12:29.729 "traddr": "10.0.0.2", 00:12:29.729 "trsvcid": "4420" 00:12:29.729 }, 00:12:29.729 "peer_address": { 00:12:29.729 "trtype": "TCP", 00:12:29.729 "adrfam": "IPv4", 00:12:29.729 "traddr": "10.0.0.1", 00:12:29.729 "trsvcid": "49496" 00:12:29.729 }, 00:12:29.729 "auth": { 00:12:29.729 "state": "completed", 00:12:29.729 "digest": "sha384", 00:12:29.729 "dhgroup": "ffdhe4096" 00:12:29.729 } 00:12:29.729 } 00:12:29.729 ]' 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.729 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.987 21:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:12:30.554 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.554 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:30.554 21:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.554 21:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.554 21:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.554 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.554 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:30.555 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:12:31.121 21:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.122 21:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.122 21:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.122 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:31.122 21:52:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:31.380 00:12:31.380 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.380 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.380 21:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.638 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.638 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.638 21:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.638 21:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.638 21:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.639 { 00:12:31.639 "cntlid": 79, 00:12:31.639 "qid": 0, 00:12:31.639 "state": "enabled", 00:12:31.639 "listen_address": { 00:12:31.639 "trtype": "TCP", 00:12:31.639 "adrfam": "IPv4", 00:12:31.639 "traddr": "10.0.0.2", 00:12:31.639 "trsvcid": "4420" 00:12:31.639 }, 00:12:31.639 "peer_address": { 00:12:31.639 "trtype": "TCP", 00:12:31.639 "adrfam": "IPv4", 00:12:31.639 "traddr": "10.0.0.1", 00:12:31.639 "trsvcid": "49530" 00:12:31.639 }, 00:12:31.639 "auth": { 00:12:31.639 "state": "completed", 00:12:31.639 "digest": "sha384", 00:12:31.639 "dhgroup": "ffdhe4096" 00:12:31.639 } 00:12:31.639 } 00:12:31.639 ]' 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.639 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.904 21:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.898 21:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.463 00:12:33.463 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.463 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.463 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.721 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.721 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.721 21:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.721 21:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:33.721 21:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.721 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.721 { 00:12:33.721 "cntlid": 81, 00:12:33.721 "qid": 0, 00:12:33.721 "state": "enabled", 00:12:33.721 "listen_address": { 00:12:33.721 "trtype": "TCP", 00:12:33.721 "adrfam": "IPv4", 00:12:33.721 "traddr": "10.0.0.2", 00:12:33.721 "trsvcid": "4420" 00:12:33.721 }, 00:12:33.721 "peer_address": { 00:12:33.721 "trtype": "TCP", 00:12:33.721 "adrfam": "IPv4", 00:12:33.721 "traddr": "10.0.0.1", 00:12:33.721 "trsvcid": "55624" 00:12:33.721 }, 00:12:33.721 "auth": { 00:12:33.721 "state": "completed", 00:12:33.721 "digest": "sha384", 00:12:33.721 "dhgroup": "ffdhe6144" 00:12:33.721 } 00:12:33.721 } 00:12:33.721 ]' 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.979 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.237 21:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:12:34.803 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.061 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:35.061 21:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.061 21:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.061 21:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.061 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.061 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.061 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.320 21:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.578 00:12:35.578 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.578 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.578 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.836 { 00:12:35.836 "cntlid": 83, 00:12:35.836 "qid": 0, 00:12:35.836 "state": "enabled", 00:12:35.836 "listen_address": { 00:12:35.836 "trtype": "TCP", 00:12:35.836 "adrfam": "IPv4", 00:12:35.836 "traddr": "10.0.0.2", 00:12:35.836 "trsvcid": "4420" 00:12:35.836 }, 00:12:35.836 "peer_address": { 00:12:35.836 "trtype": "TCP", 00:12:35.836 "adrfam": "IPv4", 00:12:35.836 "traddr": "10.0.0.1", 00:12:35.836 "trsvcid": "55644" 00:12:35.836 }, 00:12:35.836 "auth": { 00:12:35.836 "state": "completed", 00:12:35.836 "digest": "sha384", 00:12:35.836 "dhgroup": "ffdhe6144" 00:12:35.836 } 00:12:35.836 } 00:12:35.836 ]' 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.836 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.095 21:52:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:36.095 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.095 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.095 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.095 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.353 21:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:36.919 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.485 21:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.743 00:12:37.743 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.743 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.743 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.001 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.001 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.001 21:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.001 21:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.001 21:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.001 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.001 { 00:12:38.001 "cntlid": 85, 00:12:38.001 "qid": 0, 00:12:38.001 "state": "enabled", 00:12:38.001 "listen_address": { 00:12:38.001 "trtype": "TCP", 00:12:38.001 "adrfam": "IPv4", 00:12:38.001 "traddr": "10.0.0.2", 00:12:38.001 "trsvcid": "4420" 00:12:38.001 }, 00:12:38.001 "peer_address": { 00:12:38.001 "trtype": "TCP", 00:12:38.001 "adrfam": "IPv4", 00:12:38.001 "traddr": "10.0.0.1", 00:12:38.001 "trsvcid": "55666" 00:12:38.001 }, 00:12:38.001 "auth": { 00:12:38.001 "state": "completed", 00:12:38.001 "digest": "sha384", 00:12:38.001 "dhgroup": "ffdhe6144" 00:12:38.001 } 00:12:38.001 } 00:12:38.001 ]' 00:12:38.001 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.259 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.259 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.259 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:38.259 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.259 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.259 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.259 21:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.517 21:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect 
-n nqn.2024-03.io.spdk:cnode0 00:12:39.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:39.083 21:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.342 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.908 00:12:39.908 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.908 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.908 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.166 21:52:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.166 { 00:12:40.166 "cntlid": 87, 00:12:40.166 "qid": 0, 00:12:40.166 "state": "enabled", 00:12:40.166 "listen_address": { 00:12:40.166 "trtype": "TCP", 00:12:40.166 "adrfam": "IPv4", 00:12:40.166 "traddr": "10.0.0.2", 00:12:40.166 "trsvcid": "4420" 00:12:40.166 }, 00:12:40.166 "peer_address": { 00:12:40.166 "trtype": "TCP", 00:12:40.166 "adrfam": "IPv4", 00:12:40.166 "traddr": "10.0.0.1", 00:12:40.166 "trsvcid": "55706" 00:12:40.166 }, 00:12:40.166 "auth": { 00:12:40.166 "state": "completed", 00:12:40.166 "digest": "sha384", 00:12:40.166 "dhgroup": "ffdhe6144" 00:12:40.166 } 00:12:40.166 } 00:12:40.166 ]' 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.166 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:40.423 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.423 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.423 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.423 21:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.681 21:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:41.253 21:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 
00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.523 21:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.524 21:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.524 21:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.524 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.524 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.089 00:12:42.347 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.347 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.347 21:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.606 { 00:12:42.606 "cntlid": 89, 00:12:42.606 "qid": 0, 00:12:42.606 "state": "enabled", 00:12:42.606 "listen_address": { 00:12:42.606 "trtype": "TCP", 00:12:42.606 "adrfam": "IPv4", 00:12:42.606 "traddr": "10.0.0.2", 00:12:42.606 "trsvcid": "4420" 00:12:42.606 }, 00:12:42.606 "peer_address": { 00:12:42.606 "trtype": "TCP", 00:12:42.606 "adrfam": "IPv4", 00:12:42.606 "traddr": "10.0.0.1", 00:12:42.606 "trsvcid": "59782" 00:12:42.606 }, 00:12:42.606 "auth": { 00:12:42.606 "state": "completed", 00:12:42.606 "digest": "sha384", 00:12:42.606 "dhgroup": "ffdhe8192" 00:12:42.606 } 00:12:42.606 } 00:12:42.606 ]' 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.606 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.864 21:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:12:43.441 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.441 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:43.441 21:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.441 21:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.698 21:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.698 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.698 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:43.698 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.956 21:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.521 00:12:44.521 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.521 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.521 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.779 { 00:12:44.779 "cntlid": 91, 00:12:44.779 "qid": 0, 00:12:44.779 "state": "enabled", 00:12:44.779 "listen_address": { 00:12:44.779 "trtype": "TCP", 00:12:44.779 "adrfam": "IPv4", 00:12:44.779 "traddr": "10.0.0.2", 00:12:44.779 "trsvcid": "4420" 00:12:44.779 }, 00:12:44.779 "peer_address": { 00:12:44.779 "trtype": "TCP", 00:12:44.779 "adrfam": "IPv4", 00:12:44.779 "traddr": "10.0.0.1", 00:12:44.779 "trsvcid": "59806" 00:12:44.779 }, 00:12:44.779 "auth": { 00:12:44.779 "state": "completed", 00:12:44.779 "digest": "sha384", 00:12:44.779 "dhgroup": "ffdhe8192" 00:12:44.779 } 00:12:44.779 } 00:12:44.779 ]' 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.779 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.036 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.036 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.036 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.036 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.036 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.294 21:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:12:45.859 
21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.859 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:45.859 21:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.859 21:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.116 21:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.116 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.116 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.116 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.374 21:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.940 00:12:46.940 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.940 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:46.940 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.197 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.197 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.198 { 00:12:47.198 "cntlid": 93, 00:12:47.198 "qid": 0, 00:12:47.198 "state": "enabled", 00:12:47.198 "listen_address": { 00:12:47.198 "trtype": "TCP", 00:12:47.198 "adrfam": "IPv4", 00:12:47.198 "traddr": "10.0.0.2", 00:12:47.198 "trsvcid": "4420" 00:12:47.198 }, 00:12:47.198 "peer_address": { 00:12:47.198 "trtype": "TCP", 00:12:47.198 "adrfam": "IPv4", 00:12:47.198 "traddr": "10.0.0.1", 00:12:47.198 "trsvcid": "59832" 00:12:47.198 }, 00:12:47.198 "auth": { 00:12:47.198 "state": "completed", 00:12:47.198 "digest": "sha384", 00:12:47.198 "dhgroup": "ffdhe8192" 00:12:47.198 } 00:12:47.198 } 00:12:47.198 ]' 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.198 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.455 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.455 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.455 21:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.714 21:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:48.281 21:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:48.539 
21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:48.539 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.474 00:12:49.474 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.474 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.474 21:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.474 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.474 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.474 21:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.474 21:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.474 21:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.474 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.474 { 00:12:49.474 "cntlid": 95, 00:12:49.474 "qid": 0, 00:12:49.474 "state": "enabled", 00:12:49.474 "listen_address": { 00:12:49.474 "trtype": "TCP", 00:12:49.474 "adrfam": "IPv4", 00:12:49.474 "traddr": "10.0.0.2", 00:12:49.474 "trsvcid": "4420" 00:12:49.474 }, 00:12:49.474 "peer_address": { 00:12:49.474 "trtype": "TCP", 00:12:49.474 "adrfam": "IPv4", 00:12:49.474 "traddr": "10.0.0.1", 00:12:49.474 "trsvcid": "59870" 00:12:49.474 }, 00:12:49.474 "auth": { 00:12:49.474 "state": "completed", 00:12:49.474 "digest": "sha384", 00:12:49.474 "dhgroup": "ffdhe8192" 00:12:49.474 } 00:12:49.474 } 00:12:49.474 ]' 00:12:49.474 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.733 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.733 21:52:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.733 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:49.733 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.733 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.733 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.733 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.991 21:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:50.951 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.208 21:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:51.209 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.209 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.466 00:12:51.466 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.466 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.466 21:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.724 { 00:12:51.724 "cntlid": 97, 00:12:51.724 "qid": 0, 00:12:51.724 "state": "enabled", 00:12:51.724 "listen_address": { 00:12:51.724 "trtype": "TCP", 00:12:51.724 "adrfam": "IPv4", 00:12:51.724 "traddr": "10.0.0.2", 00:12:51.724 "trsvcid": "4420" 00:12:51.724 }, 00:12:51.724 "peer_address": { 00:12:51.724 "trtype": "TCP", 00:12:51.724 "adrfam": "IPv4", 00:12:51.724 "traddr": "10.0.0.1", 00:12:51.724 "trsvcid": "54826" 00:12:51.724 }, 00:12:51.724 "auth": { 00:12:51.724 "state": "completed", 00:12:51.724 "digest": "sha512", 00:12:51.724 "dhgroup": "null" 00:12:51.724 } 00:12:51.724 } 00:12:51.724 ]' 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.724 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.983 21:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret 
DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:12:52.550 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.550 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:52.550 21:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.550 21:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.808 21:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.808 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.808 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.808 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.066 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.324 00:12:53.324 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.324 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.324 21:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.583 21:52:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.583 { 00:12:53.583 "cntlid": 99, 00:12:53.583 "qid": 0, 00:12:53.583 "state": "enabled", 00:12:53.583 "listen_address": { 00:12:53.583 "trtype": "TCP", 00:12:53.583 "adrfam": "IPv4", 00:12:53.583 "traddr": "10.0.0.2", 00:12:53.583 "trsvcid": "4420" 00:12:53.583 }, 00:12:53.583 "peer_address": { 00:12:53.583 "trtype": "TCP", 00:12:53.583 "adrfam": "IPv4", 00:12:53.583 "traddr": "10.0.0.1", 00:12:53.583 "trsvcid": "54844" 00:12:53.583 }, 00:12:53.583 "auth": { 00:12:53.583 "state": "completed", 00:12:53.583 "digest": "sha512", 00:12:53.583 "dhgroup": "null" 00:12:53.583 } 00:12:53.583 } 00:12:53.583 ]' 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.583 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.149 21:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.716 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.975 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.233 00:12:55.233 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.233 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.233 21:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.800 { 00:12:55.800 "cntlid": 101, 00:12:55.800 "qid": 0, 00:12:55.800 "state": "enabled", 00:12:55.800 "listen_address": { 00:12:55.800 "trtype": "TCP", 00:12:55.800 "adrfam": "IPv4", 00:12:55.800 "traddr": "10.0.0.2", 00:12:55.800 "trsvcid": "4420" 00:12:55.800 }, 00:12:55.800 "peer_address": { 00:12:55.800 "trtype": "TCP", 00:12:55.800 "adrfam": "IPv4", 00:12:55.800 "traddr": "10.0.0.1", 00:12:55.800 "trsvcid": "54870" 00:12:55.800 }, 00:12:55.800 "auth": { 00:12:55.800 "state": "completed", 00:12:55.800 "digest": "sha512", 00:12:55.800 "dhgroup": "null" 00:12:55.800 } 00:12:55.800 } 00:12:55.800 ]' 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
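
A note on the repeating pattern above: each pass is one connect_authenticate cycle from target/auth.sh. The host-side bdev_nvme options are restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with the DH-HMAC-CHAP key (and optional controller key) under test, a controller is attached and its qpair inspected over RPC, and the kernel initiator then repeats the handshake via nvme connect before the host entry is removed again. The following is a minimal, illustrative sketch of one such cycle, assuming a target subsystem nqn.2024-03.io.spdk:cnode0 listening on 10.0.0.2:4420, a host RPC socket at /var/tmp/host.sock, the target RPC socket at its default path, and keys named key0/ckey0 already loaded on both sides; the secrets below are placeholders, not the values used in this run:

    # -- one connect_authenticate cycle (sketch; digest, dhgroup and key id are the loop parameters) --
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11

    # restrict the SPDK host stack to the digest/dhgroup combination under test
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

    # allow the host NQN on the subsystem with the key pair under test (target RPC socket assumed default)
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # attach a controller through the SPDK host stack and verify the authenticated qpair
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'        # expect: nvme0
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'      # expect: sha512
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'     # expect: null
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'       # expect: completed
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

    # repeat the handshake with the kernel initiator, then remove the host entry again
    nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
        --hostid bee0c731-72a8-497b-84f7-4425e7deee11 \
        --dhchap-secret "DHHC-1:00:<host-key-placeholder>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl-key-placeholder>"
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
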
00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:55.800 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.801 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.801 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.801 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.059 21:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.993 21:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.560 00:12:57.560 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.560 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.560 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.819 { 00:12:57.819 "cntlid": 103, 00:12:57.819 "qid": 0, 00:12:57.819 "state": "enabled", 00:12:57.819 "listen_address": { 00:12:57.819 "trtype": "TCP", 00:12:57.819 "adrfam": "IPv4", 00:12:57.819 "traddr": "10.0.0.2", 00:12:57.819 "trsvcid": "4420" 00:12:57.819 }, 00:12:57.819 "peer_address": { 00:12:57.819 "trtype": "TCP", 00:12:57.819 "adrfam": "IPv4", 00:12:57.819 "traddr": "10.0.0.1", 00:12:57.819 "trsvcid": "54888" 00:12:57.819 }, 00:12:57.819 "auth": { 00:12:57.819 "state": "completed", 00:12:57.819 "digest": "sha512", 00:12:57.819 "dhgroup": "null" 00:12:57.819 } 00:12:57.819 } 00:12:57.819 ]' 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.819 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:57.820 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.820 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.820 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.820 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.078 21:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:59.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.014 21:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.639 00:12:59.639 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.639 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.639 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.898 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.898 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.898 21:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.898 21:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.898 21:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.898 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.898 { 00:12:59.898 "cntlid": 105, 00:12:59.898 "qid": 0, 00:12:59.898 "state": "enabled", 00:12:59.898 "listen_address": { 00:12:59.898 "trtype": "TCP", 00:12:59.898 "adrfam": "IPv4", 00:12:59.898 "traddr": "10.0.0.2", 00:12:59.898 "trsvcid": "4420" 00:12:59.898 }, 00:12:59.898 "peer_address": { 00:12:59.898 "trtype": "TCP", 00:12:59.898 "adrfam": "IPv4", 00:12:59.898 "traddr": "10.0.0.1", 00:12:59.898 "trsvcid": "54920" 00:12:59.898 }, 00:12:59.898 "auth": { 00:12:59.898 "state": "completed", 00:12:59.898 "digest": "sha512", 00:12:59.898 "dhgroup": "ffdhe2048" 00:12:59.898 } 00:12:59.898 } 00:12:59.898 ]' 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.899 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.157 21:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.091 21:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.657 00:13:01.657 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.657 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.657 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.915 { 00:13:01.915 "cntlid": 107, 00:13:01.915 "qid": 0, 00:13:01.915 "state": "enabled", 00:13:01.915 "listen_address": { 00:13:01.915 "trtype": "TCP", 00:13:01.915 "adrfam": "IPv4", 00:13:01.915 "traddr": "10.0.0.2", 00:13:01.915 "trsvcid": "4420" 00:13:01.915 }, 00:13:01.915 "peer_address": { 00:13:01.915 "trtype": "TCP", 00:13:01.915 "adrfam": "IPv4", 00:13:01.915 "traddr": "10.0.0.1", 00:13:01.915 "trsvcid": "44286" 00:13:01.915 }, 00:13:01.915 "auth": { 00:13:01.915 "state": "completed", 00:13:01.915 "digest": "sha512", 00:13:01.915 "dhgroup": "ffdhe2048" 00:13:01.915 } 00:13:01.915 } 00:13:01.915 ]' 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.915 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.174 21:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:13:02.740 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.741 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:02.741 21:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.741 21:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.741 21:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.741 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.741 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.741 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.999 21:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.257 21:53:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.257 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.257 21:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.515 00:13:03.515 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.515 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.515 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.774 { 00:13:03.774 "cntlid": 109, 00:13:03.774 "qid": 0, 00:13:03.774 "state": "enabled", 00:13:03.774 "listen_address": { 00:13:03.774 "trtype": "TCP", 00:13:03.774 "adrfam": "IPv4", 00:13:03.774 "traddr": "10.0.0.2", 00:13:03.774 "trsvcid": "4420" 00:13:03.774 }, 00:13:03.774 "peer_address": { 00:13:03.774 "trtype": "TCP", 00:13:03.774 "adrfam": "IPv4", 00:13:03.774 "traddr": "10.0.0.1", 00:13:03.774 "trsvcid": "44308" 00:13:03.774 }, 00:13:03.774 "auth": { 00:13:03.774 "state": "completed", 00:13:03.774 "digest": "sha512", 00:13:03.774 "dhgroup": "ffdhe2048" 00:13:03.774 } 00:13:03.774 } 00:13:03.774 ]' 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.774 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.033 21:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret 
DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:04.967 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:04.968 21:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.535 00:13:05.535 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.535 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.535 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.793 21:53:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.793 { 00:13:05.793 "cntlid": 111, 00:13:05.793 "qid": 0, 00:13:05.793 "state": "enabled", 00:13:05.793 "listen_address": { 00:13:05.793 "trtype": "TCP", 00:13:05.793 "adrfam": "IPv4", 00:13:05.793 "traddr": "10.0.0.2", 00:13:05.793 "trsvcid": "4420" 00:13:05.793 }, 00:13:05.793 "peer_address": { 00:13:05.793 "trtype": "TCP", 00:13:05.793 "adrfam": "IPv4", 00:13:05.793 "traddr": "10.0.0.1", 00:13:05.793 "trsvcid": "44340" 00:13:05.793 }, 00:13:05.793 "auth": { 00:13:05.793 "state": "completed", 00:13:05.793 "digest": "sha512", 00:13:05.793 "dhgroup": "ffdhe2048" 00:13:05.793 } 00:13:05.793 } 00:13:05.793 ]' 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.793 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.052 21:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.987 21:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.553 00:13:07.553 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.553 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.553 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.815 { 00:13:07.815 "cntlid": 113, 00:13:07.815 "qid": 0, 00:13:07.815 "state": "enabled", 00:13:07.815 "listen_address": { 00:13:07.815 "trtype": "TCP", 00:13:07.815 "adrfam": "IPv4", 00:13:07.815 "traddr": "10.0.0.2", 00:13:07.815 "trsvcid": "4420" 00:13:07.815 }, 00:13:07.815 "peer_address": { 00:13:07.815 "trtype": "TCP", 00:13:07.815 "adrfam": "IPv4", 00:13:07.815 "traddr": "10.0.0.1", 00:13:07.815 "trsvcid": "44366" 00:13:07.815 }, 00:13:07.815 "auth": { 00:13:07.815 "state": "completed", 00:13:07.815 "digest": "sha512", 00:13:07.815 "dhgroup": "ffdhe3072" 00:13:07.815 } 00:13:07.815 } 00:13:07.815 
]' 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.815 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.075 21:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.024 21:53:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.024 21:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.283 00:13:09.542 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.542 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.542 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.801 { 00:13:09.801 "cntlid": 115, 00:13:09.801 "qid": 0, 00:13:09.801 "state": "enabled", 00:13:09.801 "listen_address": { 00:13:09.801 "trtype": "TCP", 00:13:09.801 "adrfam": "IPv4", 00:13:09.801 "traddr": "10.0.0.2", 00:13:09.801 "trsvcid": "4420" 00:13:09.801 }, 00:13:09.801 "peer_address": { 00:13:09.801 "trtype": "TCP", 00:13:09.801 "adrfam": "IPv4", 00:13:09.801 "traddr": "10.0.0.1", 00:13:09.801 "trsvcid": "44396" 00:13:09.801 }, 00:13:09.801 "auth": { 00:13:09.801 "state": "completed", 00:13:09.801 "digest": "sha512", 00:13:09.801 "dhgroup": "ffdhe3072" 00:13:09.801 } 00:13:09.801 } 00:13:09.801 ]' 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.801 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.059 21:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.995 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.254 00:13:11.254 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.254 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.254 21:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.512 { 00:13:11.512 "cntlid": 117, 00:13:11.512 "qid": 0, 00:13:11.512 "state": "enabled", 00:13:11.512 "listen_address": { 00:13:11.512 "trtype": "TCP", 00:13:11.512 "adrfam": "IPv4", 00:13:11.512 "traddr": "10.0.0.2", 00:13:11.512 "trsvcid": "4420" 00:13:11.512 }, 00:13:11.512 "peer_address": { 00:13:11.512 "trtype": "TCP", 00:13:11.512 "adrfam": "IPv4", 00:13:11.512 "traddr": "10.0.0.1", 00:13:11.512 "trsvcid": "44414" 00:13:11.512 }, 00:13:11.512 "auth": { 00:13:11.512 "state": "completed", 00:13:11.512 "digest": "sha512", 00:13:11.512 "dhgroup": "ffdhe3072" 00:13:11.512 } 00:13:11.512 } 00:13:11.512 ]' 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.512 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.771 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:11.771 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.771 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.771 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.771 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.030 21:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.596 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.865 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.150 00:13:13.408 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.408 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.408 21:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.408 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.408 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.408 21:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.408 21:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.408 21:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.408 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.408 { 00:13:13.408 "cntlid": 119, 00:13:13.408 "qid": 0, 00:13:13.408 "state": "enabled", 00:13:13.408 "listen_address": { 00:13:13.409 "trtype": "TCP", 00:13:13.409 "adrfam": "IPv4", 00:13:13.409 "traddr": "10.0.0.2", 00:13:13.409 "trsvcid": "4420" 00:13:13.409 }, 00:13:13.409 "peer_address": { 00:13:13.409 "trtype": "TCP", 00:13:13.409 "adrfam": "IPv4", 00:13:13.409 "traddr": "10.0.0.1", 00:13:13.409 "trsvcid": "59262" 00:13:13.409 }, 00:13:13.409 "auth": { 00:13:13.409 "state": "completed", 00:13:13.409 "digest": "sha512", 
00:13:13.409 "dhgroup": "ffdhe3072" 00:13:13.409 } 00:13:13.409 } 00:13:13.409 ]' 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.667 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.926 21:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:14.493 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.059 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.317 00:13:15.317 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.317 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.317 21:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.575 { 00:13:15.575 "cntlid": 121, 00:13:15.575 "qid": 0, 00:13:15.575 "state": "enabled", 00:13:15.575 "listen_address": { 00:13:15.575 "trtype": "TCP", 00:13:15.575 "adrfam": "IPv4", 00:13:15.575 "traddr": "10.0.0.2", 00:13:15.575 "trsvcid": "4420" 00:13:15.575 }, 00:13:15.575 "peer_address": { 00:13:15.575 "trtype": "TCP", 00:13:15.575 "adrfam": "IPv4", 00:13:15.575 "traddr": "10.0.0.1", 00:13:15.575 "trsvcid": "59278" 00:13:15.575 }, 00:13:15.575 "auth": { 00:13:15.575 "state": "completed", 00:13:15.575 "digest": "sha512", 00:13:15.575 "dhgroup": "ffdhe4096" 00:13:15.575 } 00:13:15.575 } 00:13:15.575 ]' 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.575 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.142 21:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.707 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.965 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.234 00:13:17.509 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.509 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:13:17.509 21:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.509 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.509 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.509 21:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.509 21:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.509 21:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.509 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.509 { 00:13:17.509 "cntlid": 123, 00:13:17.509 "qid": 0, 00:13:17.509 "state": "enabled", 00:13:17.509 "listen_address": { 00:13:17.509 "trtype": "TCP", 00:13:17.509 "adrfam": "IPv4", 00:13:17.509 "traddr": "10.0.0.2", 00:13:17.509 "trsvcid": "4420" 00:13:17.509 }, 00:13:17.509 "peer_address": { 00:13:17.509 "trtype": "TCP", 00:13:17.509 "adrfam": "IPv4", 00:13:17.509 "traddr": "10.0.0.1", 00:13:17.509 "trsvcid": "59298" 00:13:17.509 }, 00:13:17.509 "auth": { 00:13:17.509 "state": "completed", 00:13:17.509 "digest": "sha512", 00:13:17.509 "dhgroup": "ffdhe4096" 00:13:17.509 } 00:13:17.509 } 00:13:17.509 ]' 00:13:17.509 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.768 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.768 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.768 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:17.768 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.768 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.768 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.768 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.026 21:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.591 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:19.155 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:19.155 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.155 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:19.155 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:19.155 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:19.156 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.156 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.156 21:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.156 21:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.156 21:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.156 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.156 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.413 00:13:19.413 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.413 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.413 21:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.671 { 00:13:19.671 "cntlid": 125, 00:13:19.671 "qid": 0, 00:13:19.671 "state": "enabled", 00:13:19.671 "listen_address": { 00:13:19.671 "trtype": "TCP", 00:13:19.671 "adrfam": "IPv4", 00:13:19.671 "traddr": "10.0.0.2", 00:13:19.671 "trsvcid": "4420" 00:13:19.671 }, 00:13:19.671 "peer_address": { 00:13:19.671 "trtype": "TCP", 00:13:19.671 "adrfam": "IPv4", 00:13:19.671 "traddr": 
"10.0.0.1", 00:13:19.671 "trsvcid": "59316" 00:13:19.671 }, 00:13:19.671 "auth": { 00:13:19.671 "state": "completed", 00:13:19.671 "digest": "sha512", 00:13:19.671 "dhgroup": "ffdhe4096" 00:13:19.671 } 00:13:19.671 } 00:13:19.671 ]' 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.671 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.237 21:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.802 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.062 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:21.323 00:13:21.323 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.323 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.323 21:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.585 { 00:13:21.585 "cntlid": 127, 00:13:21.585 "qid": 0, 00:13:21.585 "state": "enabled", 00:13:21.585 "listen_address": { 00:13:21.585 "trtype": "TCP", 00:13:21.585 "adrfam": "IPv4", 00:13:21.585 "traddr": "10.0.0.2", 00:13:21.585 "trsvcid": "4420" 00:13:21.585 }, 00:13:21.585 "peer_address": { 00:13:21.585 "trtype": "TCP", 00:13:21.585 "adrfam": "IPv4", 00:13:21.585 "traddr": "10.0.0.1", 00:13:21.585 "trsvcid": "59350" 00:13:21.585 }, 00:13:21.585 "auth": { 00:13:21.585 "state": "completed", 00:13:21.585 "digest": "sha512", 00:13:21.585 "dhgroup": "ffdhe4096" 00:13:21.585 } 00:13:21.585 } 00:13:21.585 ]' 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.585 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.881 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.881 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.881 21:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.881 21:53:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:22.815 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.072 21:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.329 00:13:23.329 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:13:23.329 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.329 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.894 { 00:13:23.894 "cntlid": 129, 00:13:23.894 "qid": 0, 00:13:23.894 "state": "enabled", 00:13:23.894 "listen_address": { 00:13:23.894 "trtype": "TCP", 00:13:23.894 "adrfam": "IPv4", 00:13:23.894 "traddr": "10.0.0.2", 00:13:23.894 "trsvcid": "4420" 00:13:23.894 }, 00:13:23.894 "peer_address": { 00:13:23.894 "trtype": "TCP", 00:13:23.894 "adrfam": "IPv4", 00:13:23.894 "traddr": "10.0.0.1", 00:13:23.894 "trsvcid": "57034" 00:13:23.894 }, 00:13:23.894 "auth": { 00:13:23.894 "state": "completed", 00:13:23.894 "digest": "sha512", 00:13:23.894 "dhgroup": "ffdhe6144" 00:13:23.894 } 00:13:23.894 } 00:13:23.894 ]' 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.894 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.152 21:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:24.717 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.975 21:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.540 00:13:25.540 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.540 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.540 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.798 { 00:13:25.798 "cntlid": 131, 00:13:25.798 "qid": 0, 00:13:25.798 "state": "enabled", 00:13:25.798 "listen_address": { 00:13:25.798 "trtype": "TCP", 00:13:25.798 "adrfam": "IPv4", 00:13:25.798 "traddr": "10.0.0.2", 00:13:25.798 
"trsvcid": "4420" 00:13:25.798 }, 00:13:25.798 "peer_address": { 00:13:25.798 "trtype": "TCP", 00:13:25.798 "adrfam": "IPv4", 00:13:25.798 "traddr": "10.0.0.1", 00:13:25.798 "trsvcid": "57052" 00:13:25.798 }, 00:13:25.798 "auth": { 00:13:25.798 "state": "completed", 00:13:25.798 "digest": "sha512", 00:13:25.798 "dhgroup": "ffdhe6144" 00:13:25.798 } 00:13:25.798 } 00:13:25.798 ]' 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.798 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.057 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:26.057 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.057 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.057 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.057 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.315 21:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:13:26.880 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.880 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:26.880 21:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.880 21:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.138 21:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.138 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.138 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:27.138 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:27.396 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:27.396 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.396 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:27.396 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:27.396 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:27.396 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.396 21:53:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.396 21:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.397 21:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.397 21:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.397 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.397 21:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.655 00:13:27.912 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.912 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.912 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.170 { 00:13:28.170 "cntlid": 133, 00:13:28.170 "qid": 0, 00:13:28.170 "state": "enabled", 00:13:28.170 "listen_address": { 00:13:28.170 "trtype": "TCP", 00:13:28.170 "adrfam": "IPv4", 00:13:28.170 "traddr": "10.0.0.2", 00:13:28.170 "trsvcid": "4420" 00:13:28.170 }, 00:13:28.170 "peer_address": { 00:13:28.170 "trtype": "TCP", 00:13:28.170 "adrfam": "IPv4", 00:13:28.170 "traddr": "10.0.0.1", 00:13:28.170 "trsvcid": "57080" 00:13:28.170 }, 00:13:28.170 "auth": { 00:13:28.170 "state": "completed", 00:13:28.170 "digest": "sha512", 00:13:28.170 "dhgroup": "ffdhe6144" 00:13:28.170 } 00:13:28.170 } 00:13:28.170 ]' 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.170 21:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.170 21:53:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.736 21:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:29.302 21:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:29.578 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:29.867 00:13:29.867 21:53:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.867 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.867 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.136 { 00:13:30.136 "cntlid": 135, 00:13:30.136 "qid": 0, 00:13:30.136 "state": "enabled", 00:13:30.136 "listen_address": { 00:13:30.136 "trtype": "TCP", 00:13:30.136 "adrfam": "IPv4", 00:13:30.136 "traddr": "10.0.0.2", 00:13:30.136 "trsvcid": "4420" 00:13:30.136 }, 00:13:30.136 "peer_address": { 00:13:30.136 "trtype": "TCP", 00:13:30.136 "adrfam": "IPv4", 00:13:30.136 "traddr": "10.0.0.1", 00:13:30.136 "trsvcid": "57110" 00:13:30.136 }, 00:13:30.136 "auth": { 00:13:30.136 "state": "completed", 00:13:30.136 "digest": "sha512", 00:13:30.136 "dhgroup": "ffdhe6144" 00:13:30.136 } 00:13:30.136 } 00:13:30.136 ]' 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.136 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.393 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:30.393 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.393 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.393 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.393 21:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.652 21:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.218 21:53:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:31.218 21:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.476 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.042 00:13:32.042 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.042 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.042 21:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.300 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.300 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.300 21:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.300 21:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.558 { 00:13:32.558 "cntlid": 137, 00:13:32.558 "qid": 0, 00:13:32.558 "state": "enabled", 00:13:32.558 "listen_address": { 00:13:32.558 "trtype": "TCP", 00:13:32.558 "adrfam": "IPv4", 00:13:32.558 
"traddr": "10.0.0.2", 00:13:32.558 "trsvcid": "4420" 00:13:32.558 }, 00:13:32.558 "peer_address": { 00:13:32.558 "trtype": "TCP", 00:13:32.558 "adrfam": "IPv4", 00:13:32.558 "traddr": "10.0.0.1", 00:13:32.558 "trsvcid": "41398" 00:13:32.558 }, 00:13:32.558 "auth": { 00:13:32.558 "state": "completed", 00:13:32.558 "digest": "sha512", 00:13:32.558 "dhgroup": "ffdhe8192" 00:13:32.558 } 00:13:32.558 } 00:13:32.558 ]' 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.558 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.816 21:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.751 21:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.318 00:13:34.576 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.576 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.576 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.576 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.576 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.576 21:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.576 21:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.834 { 00:13:34.834 "cntlid": 139, 00:13:34.834 "qid": 0, 00:13:34.834 "state": "enabled", 00:13:34.834 "listen_address": { 00:13:34.834 "trtype": "TCP", 00:13:34.834 "adrfam": "IPv4", 00:13:34.834 "traddr": "10.0.0.2", 00:13:34.834 "trsvcid": "4420" 00:13:34.834 }, 00:13:34.834 "peer_address": { 00:13:34.834 "trtype": "TCP", 00:13:34.834 "adrfam": "IPv4", 00:13:34.834 "traddr": "10.0.0.1", 00:13:34.834 "trsvcid": "41408" 00:13:34.834 }, 00:13:34.834 "auth": { 00:13:34.834 "state": "completed", 00:13:34.834 "digest": "sha512", 00:13:34.834 "dhgroup": "ffdhe8192" 00:13:34.834 } 00:13:34.834 } 00:13:34.834 ]' 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:13:34.834 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.092 21:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:01:Y2E4YzllMjQ0ZjI1MzJmYmUxN2VmZmIyNWE4NTlhNDD8Y+FZ: --dhchap-ctrl-secret DHHC-1:02:NjZkYTI4ZTY3OGEyOTNlOTdkZmY3OTRiNWYzNDMwNjM1MDZlOGVjZjgwNzZiMmYxhzIh9w==: 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.026 21:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.959 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.959 { 00:13:36.959 "cntlid": 141, 00:13:36.959 "qid": 0, 00:13:36.959 "state": "enabled", 00:13:36.959 "listen_address": { 00:13:36.959 "trtype": "TCP", 00:13:36.959 "adrfam": "IPv4", 00:13:36.959 "traddr": "10.0.0.2", 00:13:36.959 "trsvcid": "4420" 00:13:36.959 }, 00:13:36.959 "peer_address": { 00:13:36.959 "trtype": "TCP", 00:13:36.959 "adrfam": "IPv4", 00:13:36.959 "traddr": "10.0.0.1", 00:13:36.959 "trsvcid": "41428" 00:13:36.959 }, 00:13:36.959 "auth": { 00:13:36.959 "state": "completed", 00:13:36.959 "digest": "sha512", 00:13:36.959 "dhgroup": "ffdhe8192" 00:13:36.959 } 00:13:36.959 } 00:13:36.959 ]' 00:13:36.959 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.218 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.218 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.218 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.218 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.218 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.218 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.218 21:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.477 21:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:02:Yjg4NThlOWYzMzQ5ZDg3ODBkZjdhOGEwMmFhNmNhNjY2ODg1NWNlZDAyMGU1NmQ55bz01A==: --dhchap-ctrl-secret DHHC-1:01:YWMzMmMxNTBlNjZmNDllM2E3NDJhYWQwZjg2ZmJlOTZjideS: 00:13:38.055 21:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.055 21:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:38.055 21:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.055 
21:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.055 21:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.055 21:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.055 21:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.055 21:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.312 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.313 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:38.878 00:13:39.136 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.136 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.136 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.136 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.393 { 00:13:39.393 "cntlid": 143, 00:13:39.393 "qid": 0, 00:13:39.393 "state": "enabled", 00:13:39.393 "listen_address": { 00:13:39.393 "trtype": "TCP", 00:13:39.393 "adrfam": 
"IPv4", 00:13:39.393 "traddr": "10.0.0.2", 00:13:39.393 "trsvcid": "4420" 00:13:39.393 }, 00:13:39.393 "peer_address": { 00:13:39.393 "trtype": "TCP", 00:13:39.393 "adrfam": "IPv4", 00:13:39.393 "traddr": "10.0.0.1", 00:13:39.393 "trsvcid": "41460" 00:13:39.393 }, 00:13:39.393 "auth": { 00:13:39.393 "state": "completed", 00:13:39.393 "digest": "sha512", 00:13:39.393 "dhgroup": "ffdhe8192" 00:13:39.393 } 00:13:39.393 } 00:13:39.393 ]' 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.393 21:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.393 21:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.393 21:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.393 21:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.650 21:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:13:40.584 21:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.584 21:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:40.584 21:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.584 21:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.584 21:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.584 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:40.584 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:40.584 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:40.584 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:40.584 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:40.584 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.842 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.408 00:13:41.408 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.408 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.408 21:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.666 { 00:13:41.666 "cntlid": 145, 00:13:41.666 "qid": 0, 00:13:41.666 "state": "enabled", 00:13:41.666 "listen_address": { 00:13:41.666 "trtype": "TCP", 00:13:41.666 "adrfam": "IPv4", 00:13:41.666 "traddr": "10.0.0.2", 00:13:41.666 "trsvcid": "4420" 00:13:41.666 }, 00:13:41.666 "peer_address": { 00:13:41.666 "trtype": "TCP", 00:13:41.666 "adrfam": "IPv4", 00:13:41.666 "traddr": "10.0.0.1", 00:13:41.666 "trsvcid": "41474" 00:13:41.666 }, 00:13:41.666 "auth": { 00:13:41.666 "state": "completed", 00:13:41.666 "digest": "sha512", 00:13:41.666 "dhgroup": "ffdhe8192" 00:13:41.666 } 00:13:41.666 } 00:13:41.666 ]' 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.666 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.924 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.924 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.924 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.182 21:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:00:NzM4MzNkNjQ4ZTcyOWI3NDdkMDg5MzFkNWE0MGY1NDZjYmY2Y2M2M2VmYTM3MzUwWb2xdA==: --dhchap-ctrl-secret DHHC-1:03:OTYyNzQxYmUzZGNiNDRkOTYxMjYwMzVkMWRkNmVjNTE1NTE2YWQ5NWVjZTcwMzc5YjZiYTVkOWU1NGFkMzNkYykG7Zg=: 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:42.749 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:43.315 request: 00:13:43.315 { 00:13:43.315 "name": "nvme0", 00:13:43.315 "trtype": "tcp", 00:13:43.315 "traddr": "10.0.0.2", 00:13:43.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11", 00:13:43.315 "adrfam": "ipv4", 00:13:43.315 "trsvcid": "4420", 00:13:43.315 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:43.315 "dhchap_key": "key2", 00:13:43.315 "method": "bdev_nvme_attach_controller", 00:13:43.315 "req_id": 1 00:13:43.315 } 00:13:43.315 Got JSON-RPC error response 00:13:43.315 response: 00:13:43.315 { 00:13:43.315 "code": -5, 00:13:43.315 "message": "Input/output error" 00:13:43.315 } 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:43.315 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.316 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:43.316 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.316 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t 
hostrpc 00:13:43.316 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.316 21:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.316 21:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.883 request: 00:13:43.883 { 00:13:43.883 "name": "nvme0", 00:13:43.883 "trtype": "tcp", 00:13:43.883 "traddr": "10.0.0.2", 00:13:43.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11", 00:13:43.883 "adrfam": "ipv4", 00:13:43.883 "trsvcid": "4420", 00:13:43.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:43.883 "dhchap_key": "key1", 00:13:43.883 "dhchap_ctrlr_key": "ckey2", 00:13:43.883 "method": "bdev_nvme_attach_controller", 00:13:43.883 "req_id": 1 00:13:43.883 } 00:13:43.883 Got JSON-RPC error response 00:13:43.883 response: 00:13:43.883 { 00:13:43.883 "code": -5, 00:13:43.883 "message": "Input/output error" 00:13:43.883 } 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key1 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.883 21:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.450 request: 00:13:44.450 { 00:13:44.450 "name": "nvme0", 00:13:44.450 "trtype": "tcp", 00:13:44.450 "traddr": "10.0.0.2", 00:13:44.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11", 00:13:44.450 "adrfam": "ipv4", 00:13:44.450 "trsvcid": "4420", 00:13:44.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:44.450 "dhchap_key": "key1", 00:13:44.450 "dhchap_ctrlr_key": "ckey1", 00:13:44.450 "method": "bdev_nvme_attach_controller", 00:13:44.450 "req_id": 1 00:13:44.450 } 00:13:44.450 Got JSON-RPC error response 00:13:44.450 response: 00:13:44.450 { 00:13:44.450 "code": -5, 00:13:44.450 "message": "Input/output error" 00:13:44.450 } 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 81307 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 81307 ']' 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 81307 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81307 00:13:44.450 killing process with pid 81307 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81307' 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 81307 00:13:44.450 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 81307 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=84346 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 84346 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 84346 ']' 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:44.707 21:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 84346 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 84346 ']' 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:45.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
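The target restart above (nvmf_tgt launched with --wait-for-rpc -L nvmf_auth, then waitforlisten on its pid) follows the usual start-and-poll pattern. A simplified sketch, dropping the ip netns wrapper seen in the trace; this waitforlisten is only a stand-in for the autotest_common.sh helper, not its actual body:

  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  waitforlisten() {   # poll the RPC socket until it answers, or give up if the process dies
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      while kill -0 "$pid" 2>/dev/null; do
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1
  }
  waitforlisten "$nvmfpid"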
00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:45.643 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.901 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:45.901 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:13:45.901 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:45.901 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.901 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:46.159 21:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:46.744 00:13:46.744 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.744 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.744 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.009 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.009 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.009 21:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.009 21:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.009 21:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.009 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.009 { 00:13:47.009 "cntlid": 1, 00:13:47.009 "qid": 0, 
00:13:47.009 "state": "enabled", 00:13:47.009 "listen_address": { 00:13:47.009 "trtype": "TCP", 00:13:47.009 "adrfam": "IPv4", 00:13:47.009 "traddr": "10.0.0.2", 00:13:47.009 "trsvcid": "4420" 00:13:47.009 }, 00:13:47.009 "peer_address": { 00:13:47.009 "trtype": "TCP", 00:13:47.009 "adrfam": "IPv4", 00:13:47.009 "traddr": "10.0.0.1", 00:13:47.009 "trsvcid": "37758" 00:13:47.009 }, 00:13:47.009 "auth": { 00:13:47.009 "state": "completed", 00:13:47.009 "digest": "sha512", 00:13:47.009 "dhgroup": "ffdhe8192" 00:13:47.009 } 00:13:47.009 } 00:13:47.009 ]' 00:13:47.009 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.267 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.267 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.267 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.267 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.267 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.267 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.267 21:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.525 21:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-secret DHHC-1:03:MzM0Y2JhNzQ2ZTUyZGYwM2ZjZTJhMjc2OWEyZWE5YzRlMWRlMTlmMGFiYjczMDc5MjgwYzU5YjI5YjRjZmRmNhUOfkg=: 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --dhchap-key key3 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:48.457 21:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.457 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.715 request: 00:13:48.715 { 00:13:48.715 "name": "nvme0", 00:13:48.715 "trtype": "tcp", 00:13:48.715 "traddr": "10.0.0.2", 00:13:48.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11", 00:13:48.715 "adrfam": "ipv4", 00:13:48.715 "trsvcid": "4420", 00:13:48.715 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.715 "dhchap_key": "key3", 00:13:48.715 "method": "bdev_nvme_attach_controller", 00:13:48.715 "req_id": 1 00:13:48.715 } 00:13:48.715 Got JSON-RPC error response 00:13:48.715 response: 00:13:48.715 { 00:13:48.715 "code": -5, 00:13:48.715 "message": "Input/output error" 00:13:48.715 } 00:13:48.715 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:48.715 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:48.715 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:48.715 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:48.715 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:48.715 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:48.716 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:48.716 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:48.973 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.973 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local 
es=0 00:13:48.974 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.974 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:48.974 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.974 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:48.974 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.974 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.974 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:49.232 request: 00:13:49.232 { 00:13:49.232 "name": "nvme0", 00:13:49.232 "trtype": "tcp", 00:13:49.232 "traddr": "10.0.0.2", 00:13:49.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11", 00:13:49.232 "adrfam": "ipv4", 00:13:49.232 "trsvcid": "4420", 00:13:49.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:49.232 "dhchap_key": "key3", 00:13:49.232 "method": "bdev_nvme_attach_controller", 00:13:49.232 "req_id": 1 00:13:49.232 } 00:13:49.232 Got JSON-RPC error response 00:13:49.232 response: 00:13:49.232 { 00:13:49.232 "code": -5, 00:13:49.232 "message": "Input/output error" 00:13:49.232 } 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:49.490 21:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.749 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.749 request: 00:13:49.749 { 00:13:49.749 "name": "nvme0", 00:13:49.749 "trtype": "tcp", 00:13:49.749 "traddr": "10.0.0.2", 00:13:49.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11", 00:13:49.749 "adrfam": "ipv4", 00:13:49.749 "trsvcid": "4420", 00:13:49.749 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:49.749 "dhchap_key": "key0", 00:13:49.749 "dhchap_ctrlr_key": "key1", 00:13:49.749 "method": "bdev_nvme_attach_controller", 00:13:49.749 "req_id": 1 00:13:49.749 } 00:13:49.749 Got JSON-RPC error response 00:13:49.749 response: 00:13:49.749 { 00:13:49.749 "code": -5, 00:13:49.749 "message": "Input/output error" 00:13:49.749 } 00:13:50.007 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:50.007 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:50.007 21:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:50.007 21:53:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:50.008 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:50.008 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:50.265 00:13:50.265 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:50.265 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.265 21:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:50.523 21:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.523 21:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.523 21:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 81345 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 81345 ']' 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 81345 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81345 00:13:50.781 killing process with pid 81345 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81345' 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 81345 00:13:50.781 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 81345 00:13:51.039 21:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:51.040 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.040 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:51.040 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.040 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:51.040 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.040 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.298 rmmod nvme_tcp 00:13:51.298 rmmod nvme_fabrics 00:13:51.298 rmmod nvme_keyring 
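
The tail of the auth test above is the positive path: attach the controller with key0 only, confirm it shows up as nvme0, then detach before cleanup. Pulled out of the trace, the same sequence looks roughly like this when driven by hand against the host-side RPC socket (paths, address and NQNs are the ones this run used; the DH-HMAC-CHAP keys are assumed to be loaded on the host app already):

    # Attach with key0 only, as in the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock

    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0

    # Confirm the controller came up under the expected name.
    [[ "$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

    # Tear it down again before the test cleans up.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The earlier NOT-wrapped attempts (key3 alone, and the key0/key1 mix after the host was removed and re-added) are expected to fail, which is exactly the -5 Input/output error JSON-RPC response shown in the trace.
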
00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 84346 ']' 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 84346 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 84346 ']' 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 84346 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84346 00:13:51.298 killing process with pid 84346 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84346' 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 84346 00:13:51.298 21:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 84346 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rhD /tmp/spdk.key-sha256.Mav /tmp/spdk.key-sha384.nX1 /tmp/spdk.key-sha512.Aeu /tmp/spdk.key-sha512.GWb /tmp/spdk.key-sha384.s3W /tmp/spdk.key-sha256.P5L '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:51.556 00:13:51.556 real 2m49.062s 00:13:51.556 user 6m44.563s 00:13:51.556 sys 0m26.399s 00:13:51.556 ************************************ 00:13:51.556 END TEST nvmf_auth_target 00:13:51.556 ************************************ 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.556 21:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.556 21:53:57 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:51.556 21:53:57 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:51.556 21:53:57 nvmf_tcp 
-- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:51.556 21:53:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.556 21:53:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.556 ************************************ 00:13:51.556 START TEST nvmf_bdevio_no_huge 00:13:51.556 ************************************ 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:51.556 * Looking for test storage... 00:13:51.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.556 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.557 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:51.815 Cannot find device "nvmf_tgt_br" 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.815 Cannot find device "nvmf_tgt_br2" 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:51.815 21:53:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:51.815 Cannot find device "nvmf_tgt_br" 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:51.815 Cannot find device "nvmf_tgt_br2" 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.815 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:51.816 21:53:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.816 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:52.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:13:52.074 00:13:52.074 --- 10.0.0.2 ping statistics --- 00:13:52.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.074 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:52.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:52.074 00:13:52.074 --- 10.0.0.3 ping statistics --- 00:13:52.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.074 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:52.074 00:13:52.074 --- 10.0.0.1 ping statistics --- 00:13:52.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.074 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=84662 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:52.074 21:53:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 84662 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 84662 ']' 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:52.074 21:53:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:52.074 [2024-07-24 21:53:57.651734] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:52.074 [2024-07-24 21:53:57.652376] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:52.332 [2024-07-24 21:53:57.801983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.332 [2024-07-24 21:53:57.919282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.332 [2024-07-24 21:53:57.919350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.332 [2024-07-24 21:53:57.919376] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.332 [2024-07-24 21:53:57.919387] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.332 [2024-07-24 21:53:57.919396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
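
The common.sh trace above (nvmf_veth_init followed by nvmfappstart) builds a veth/bridge topology between the host and a target network namespace, sanity-checks it with pings, and then launches nvmf_tgt inside that namespace without hugepages. A condensed sketch of those steps, run as root, with the second target interface (10.0.0.3 / nvmf_tgt_if2) elided since it is set up the same way:

    # Namespace plus two veth pairs: initiator side stays on the host,
    # target side moves into nvmf_tgt_ns_spdk.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing and link state, as in the trace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side ends together and open port 4420.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # sanity check before starting the target

    # Start the target inside the namespace with no hugepages: 1024 MiB of
    # regular memory and core mask 0x78, exactly as nvmfappstart does here.
    # The harness backgrounds it and waits for its RPC socket (waitforlisten).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
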
00:13:52.332 [2024-07-24 21:53:57.919573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:52.332 [2024-07-24 21:53:57.919695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:52.332 [2024-07-24 21:53:57.920299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:52.332 [2024-07-24 21:53:57.920310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.332 [2024-07-24 21:53:57.926462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.263 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:53.264 [2024-07-24 21:53:58.695789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:53.264 Malloc0 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:53.264 [2024-07-24 21:53:58.740124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:53.264 { 00:13:53.264 "params": { 00:13:53.264 "name": "Nvme$subsystem", 00:13:53.264 "trtype": "$TEST_TRANSPORT", 00:13:53.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.264 "adrfam": "ipv4", 00:13:53.264 "trsvcid": "$NVMF_PORT", 00:13:53.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.264 "hdgst": ${hdgst:-false}, 00:13:53.264 "ddgst": ${ddgst:-false} 00:13:53.264 }, 00:13:53.264 "method": "bdev_nvme_attach_controller" 00:13:53.264 } 00:13:53.264 EOF 00:13:53.264 )") 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:53.264 21:53:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:53.264 "params": { 00:13:53.264 "name": "Nvme1", 00:13:53.264 "trtype": "tcp", 00:13:53.264 "traddr": "10.0.0.2", 00:13:53.264 "adrfam": "ipv4", 00:13:53.264 "trsvcid": "4420", 00:13:53.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.264 "hdgst": false, 00:13:53.264 "ddgst": false 00:13:53.264 }, 00:13:53.264 "method": "bdev_nvme_attach_controller" 00:13:53.264 }' 00:13:53.264 [2024-07-24 21:53:58.796326] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
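
Before bdevio runs, the target is provisioned over RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem exposing that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. Spelled out with rpc.py directly (rpc_cmd in the harness wraps this script; the default /var/tmp/spdk.sock RPC socket is assumed here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with the flags used above (-u 8192 sets the IO unit size).
    "$rpc" nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB RAM-backed bdev with 512-byte blocks, then the subsystem around it.
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
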
00:13:53.264 [2024-07-24 21:53:58.796420] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84698 ] 00:13:53.264 [2024-07-24 21:53:58.937146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:53.522 [2024-07-24 21:53:59.056036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.522 [2024-07-24 21:53:59.056205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.522 [2024-07-24 21:53:59.056206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.522 [2024-07-24 21:53:59.070631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:53.522 I/O targets: 00:13:53.522 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:53.522 00:13:53.522 00:13:53.522 CUnit - A unit testing framework for C - Version 2.1-3 00:13:53.522 http://cunit.sourceforge.net/ 00:13:53.522 00:13:53.522 00:13:53.522 Suite: bdevio tests on: Nvme1n1 00:13:53.522 Test: blockdev write read block ...passed 00:13:53.522 Test: blockdev write zeroes read block ...passed 00:13:53.779 Test: blockdev write zeroes read no split ...passed 00:13:53.779 Test: blockdev write zeroes read split ...passed 00:13:53.779 Test: blockdev write zeroes read split partial ...passed 00:13:53.779 Test: blockdev reset ...[2024-07-24 21:53:59.271057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:53.779 [2024-07-24 21:53:59.271195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2428490 (9): Bad file descriptor 00:13:53.779 [2024-07-24 21:53:59.288030] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
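
The bdevio binary itself is handed its bdev configuration as JSON on /dev/fd/62, generated from the fragment printed in the trace above. A minimal standalone equivalent, assuming the standard SPDK subsystems/bdev JSON config wrapper (the generated file may carry additional bdev entries not shown here; the parameters are the ones the trace prints):

    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Run the bdevio suite against it, again without hugepages.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        --json /tmp/bdevio_nvme.json --no-huge -s 1024
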
00:13:53.779 passed 00:13:53.779 Test: blockdev write read 8 blocks ...passed 00:13:53.779 Test: blockdev write read size > 128k ...passed 00:13:53.779 Test: blockdev write read invalid size ...passed 00:13:53.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:53.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:53.779 Test: blockdev write read max offset ...passed 00:13:53.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:53.779 Test: blockdev writev readv 8 blocks ...passed 00:13:53.779 Test: blockdev writev readv 30 x 1block ...passed 00:13:53.779 Test: blockdev writev readv block ...passed 00:13:53.779 Test: blockdev writev readv size > 128k ...passed 00:13:53.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:53.779 Test: blockdev comparev and writev ...[2024-07-24 21:53:59.296210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.296257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.296279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.296291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.296653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.296679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.296697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.296707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.297184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.297216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.297234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.297245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.297682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.297713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.297731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:53.779 [2024-07-24 21:53:59.297742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:53.779 passed 00:13:53.779 Test: blockdev nvme passthru rw ...passed 00:13:53.779 Test: blockdev nvme passthru vendor specific ...[2024-07-24 21:53:59.298520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.779 [2024-07-24 21:53:59.298546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.298682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.779 [2024-07-24 21:53:59.298704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.298813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.779 [2024-07-24 21:53:59.298830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:53.779 [2024-07-24 21:53:59.298936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:53.779 [2024-07-24 21:53:59.298961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:53.779 passed 00:13:53.779 Test: blockdev nvme admin passthru ...passed 00:13:53.779 Test: blockdev copy ...passed 00:13:53.779 00:13:53.779 Run Summary: Type Total Ran Passed Failed Inactive 00:13:53.779 suites 1 1 n/a 0 0 00:13:53.779 tests 23 23 23 0 0 00:13:53.779 asserts 152 152 152 0 n/a 00:13:53.779 00:13:53.779 Elapsed time = 0.166 seconds 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.037 rmmod nvme_tcp 00:13:54.037 rmmod nvme_fabrics 00:13:54.037 rmmod nvme_keyring 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 84662 ']' 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 84662 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 84662 ']' 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 84662 00:13:54.037 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:13:54.295 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:54.295 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84662 00:13:54.295 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:13:54.295 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:13:54.295 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84662' 00:13:54.295 killing process with pid 84662 00:13:54.295 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 84662 00:13:54.295 21:53:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 84662 00:13:54.554 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:54.555 00:13:54.555 real 0m3.035s 00:13:54.555 user 0m10.030s 00:13:54.555 sys 0m1.242s 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:54.555 ************************************ 00:13:54.555 END TEST nvmf_bdevio_no_huge 00:13:54.555 ************************************ 00:13:54.555 21:54:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.555 21:54:00 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:54.555 21:54:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:54.555 21:54:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:54.555 21:54:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:54.555 ************************************ 00:13:54.555 START TEST nvmf_tls 00:13:54.555 ************************************ 00:13:54.555 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:54.813 * Looking for test storage... 
00:13:54.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.813 21:54:00 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:54.814 Cannot find device "nvmf_tgt_br" 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.814 Cannot find device "nvmf_tgt_br2" 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:54.814 Cannot find device "nvmf_tgt_br" 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:54.814 Cannot find device "nvmf_tgt_br2" 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.814 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:55.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:13:55.073 00:13:55.073 --- 10.0.0.2 ping statistics --- 00:13:55.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.073 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:55.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:55.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:13:55.073 00:13:55.073 --- 10.0.0.3 ping statistics --- 00:13:55.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.073 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:55.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:55.073 00:13:55.073 --- 10.0.0.1 ping statistics --- 00:13:55.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.073 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84873 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84873 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84873 ']' 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:55.073 21:54:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.073 [2024-07-24 21:54:00.762774] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:55.073 [2024-07-24 21:54:00.763541] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.331 [2024-07-24 21:54:00.906201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.331 [2024-07-24 21:54:01.015018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.331 [2024-07-24 21:54:01.015087] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:55.331 [2024-07-24 21:54:01.015116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.331 [2024-07-24 21:54:01.015126] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.331 [2024-07-24 21:54:01.015135] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.331 [2024-07-24 21:54:01.015180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:56.265 21:54:01 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:56.523 true 00:13:56.523 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:56.523 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:56.781 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:56.781 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:56.781 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:57.039 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.039 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:57.297 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:57.297 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:57.297 21:54:02 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:57.555 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.555 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:57.812 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:57.812 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:57.812 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.812 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:58.069 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:58.069 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:58.069 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:58.326 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:58.326 21:54:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
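The TLS-version round-trips traced above, and the kTLS checks that follow, reduce to set/get pairs against the "ssl" socket implementation. A minimal sketch of that pattern (the rpc.py path and option names are taken from this trace; this is not the full tls.sh logic):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Raise the ssl sock impl to TLS 1.3, then read the setting back and verify it.
    "$rpc" sock_impl_set_options -i ssl --tls-version 13
    version=$("$rpc" sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ $version == 13 ]] || exit 1

    # Same round-trip for the kTLS flag: enable, verify, then disable again.
    "$rpc" sock_impl_set_options -i ssl --enable-ktls
    [[ $("$rpc" sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]] || exit 1
    "$rpc" sock_impl_set_options -i ssl --disable-ktls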
00:13:58.584 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:58.584 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:58.584 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:58.584 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:58.584 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:58.842 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3QPn3cKz19 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.fhx5PdlXe6 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3QPn3cKz19 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.fhx5PdlXe6 00:13:59.100 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:59.358 21:54:04 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:59.616 [2024-07-24 21:54:05.178371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:59.617 21:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3QPn3cKz19 00:13:59.617 21:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3QPn3cKz19 00:13:59.617 21:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:59.875 [2024-07-24 21:54:05.434503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.875 21:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:00.134 21:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:00.392 [2024-07-24 21:54:05.934698] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:00.392 [2024-07-24 21:54:05.934930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.392 21:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:00.651 malloc0 00:14:00.651 21:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:00.909 21:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3QPn3cKz19 00:14:01.168 [2024-07-24 21:54:06.642261] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:01.168 21:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3QPn3cKz19 00:14:11.139 Initializing NVMe Controllers 00:14:11.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:11.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:11.139 Initialization complete. Launching workers. 
00:14:11.139 ======================================================== 00:14:11.139 Latency(us) 00:14:11.139 Device Information : IOPS MiB/s Average min max 00:14:11.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9628.98 37.61 6648.31 1258.86 8054.74 00:14:11.139 ======================================================== 00:14:11.139 Total : 9628.98 37.61 6648.31 1258.86 8054.74 00:14:11.139 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3QPn3cKz19 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3QPn3cKz19' 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85104 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85104 /var/tmp/bdevperf.sock 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85104 ']' 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.139 21:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.415 21:54:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.415 [2024-07-24 21:54:16.907060] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:11.415 [2024-07-24 21:54:16.907158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85104 ] 00:14:11.415 [2024-07-24 21:54:17.048867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.673 [2024-07-24 21:54:17.147320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.673 [2024-07-24 21:54:17.206153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.240 21:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:12.240 21:54:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:12.240 21:54:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3QPn3cKz19 00:14:12.498 [2024-07-24 21:54:18.109283] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:12.498 [2024-07-24 21:54:18.109390] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:12.498 TLSTESTn1 00:14:12.498 21:54:18 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:12.756 Running I/O for 10 seconds... 00:14:22.754 00:14:22.755 Latency(us) 00:14:22.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.755 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:22.755 Verification LBA range: start 0x0 length 0x2000 00:14:22.755 TLSTESTn1 : 10.02 4050.45 15.82 0.00 0.00 31541.18 6791.91 25618.62 00:14:22.755 =================================================================================================================== 00:14:22.755 Total : 4050.45 15.82 0.00 0.00 31541.18 6791.91 25618.62 00:14:22.755 0 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 85104 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85104 ']' 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85104 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85104 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85104' 00:14:22.755 killing process with pid 85104 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85104 00:14:22.755 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.755 00:14:22.755 Latency(us) 00:14:22.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:14:22.755 =================================================================================================================== 00:14:22.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.755 [2024-07-24 21:54:28.363330] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.755 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85104 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fhx5PdlXe6 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fhx5PdlXe6 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fhx5PdlXe6 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fhx5PdlXe6' 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85243 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85243 /var/tmp/bdevperf.sock 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85243 ']' 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.013 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.013 [2024-07-24 21:54:28.618515] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:23.013 [2024-07-24 21:54:28.618605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85243 ] 00:14:23.272 [2024-07-24 21:54:28.750773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.272 [2024-07-24 21:54:28.836750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.272 [2024-07-24 21:54:28.890597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:23.272 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.272 21:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:23.272 21:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fhx5PdlXe6 00:14:23.530 [2024-07-24 21:54:29.200725] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.530 [2024-07-24 21:54:29.200848] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:23.530 [2024-07-24 21:54:29.205750] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:23.530 [2024-07-24 21:54:29.206302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x63c830 (107): Transport endpoint is not connected 00:14:23.530 [2024-07-24 21:54:29.207290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x63c830 (9): Bad file descriptor 00:14:23.530 [2024-07-24 21:54:29.208285] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:23.530 [2024-07-24 21:54:29.208308] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:23.530 [2024-07-24 21:54:29.208338] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
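The mismatched-key attach above fails as intended: the whole case is wrapped in the NOT helper from autotest_common.sh, which inverts the exit status so an expected failure does not abort the suite, and the JSON-RPC error dump that follows below is the expected outcome. A simplified sketch of that pattern (the real NOT helper also validates its argument and distinguishes exit codes):

    NOT() {
      # Succeed only when the wrapped command fails.
      if "$@"; then
        return 1
      fi
      return 0
    }

    # The case traced above: correct subsystem and host NQN, but the second (unregistered) key.
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fhx5PdlXe6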
00:14:23.530 request: 00:14:23.530 { 00:14:23.530 "name": "TLSTEST", 00:14:23.530 "trtype": "tcp", 00:14:23.530 "traddr": "10.0.0.2", 00:14:23.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.530 "adrfam": "ipv4", 00:14:23.530 "trsvcid": "4420", 00:14:23.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.530 "psk": "/tmp/tmp.fhx5PdlXe6", 00:14:23.530 "method": "bdev_nvme_attach_controller", 00:14:23.530 "req_id": 1 00:14:23.530 } 00:14:23.530 Got JSON-RPC error response 00:14:23.530 response: 00:14:23.530 { 00:14:23.530 "code": -5, 00:14:23.530 "message": "Input/output error" 00:14:23.530 } 00:14:23.530 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85243 00:14:23.530 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85243 ']' 00:14:23.530 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85243 00:14:23.530 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:23.530 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:23.530 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85243 00:14:23.789 killing process with pid 85243 00:14:23.789 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.789 00:14:23.789 Latency(us) 00:14:23.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.789 =================================================================================================================== 00:14:23.789 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85243' 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85243 00:14:23.789 [2024-07-24 21:54:29.248753] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85243 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3QPn3cKz19 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3QPn3cKz19 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3QPn3cKz19 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3QPn3cKz19' 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.789 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85256 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85256 /var/tmp/bdevperf.sock 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85256 ']' 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.790 21:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.049 [2024-07-24 21:54:29.516327] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:24.049 [2024-07-24 21:54:29.516819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85256 ] 00:14:24.049 [2024-07-24 21:54:29.665542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.049 [2024-07-24 21:54:29.753422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.307 [2024-07-24 21:54:29.810199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:24.873 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:24.873 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:24.873 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3QPn3cKz19 00:14:25.132 [2024-07-24 21:54:30.700581] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.132 [2024-07-24 21:54:30.700720] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:25.132 [2024-07-24 21:54:30.705551] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:25.132 [2024-07-24 21:54:30.705592] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:25.132 [2024-07-24 21:54:30.705654] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:25.132 [2024-07-24 21:54:30.706277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211830 (107): Transport endpoint is not connected 00:14:25.132 [2024-07-24 21:54:30.707264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211830 (9): Bad file descriptor 00:14:25.132 [2024-07-24 21:54:30.708260] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:25.132 [2024-07-24 21:54:30.708283] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:25.132 [2024-07-24 21:54:30.708315] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:25.132 request: 00:14:25.132 { 00:14:25.132 "name": "TLSTEST", 00:14:25.132 "trtype": "tcp", 00:14:25.132 "traddr": "10.0.0.2", 00:14:25.132 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:25.132 "adrfam": "ipv4", 00:14:25.132 "trsvcid": "4420", 00:14:25.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.132 "psk": "/tmp/tmp.3QPn3cKz19", 00:14:25.132 "method": "bdev_nvme_attach_controller", 00:14:25.132 "req_id": 1 00:14:25.132 } 00:14:25.132 Got JSON-RPC error response 00:14:25.132 response: 00:14:25.132 { 00:14:25.132 "code": -5, 00:14:25.132 "message": "Input/output error" 00:14:25.132 } 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85256 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85256 ']' 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85256 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85256 00:14:25.132 killing process with pid 85256 00:14:25.132 Received shutdown signal, test time was about 10.000000 seconds 00:14:25.132 00:14:25.132 Latency(us) 00:14:25.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.132 =================================================================================================================== 00:14:25.132 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85256' 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85256 00:14:25.132 [2024-07-24 21:54:30.750441] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:25.132 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85256 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3QPn3cKz19 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3QPn3cKz19 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3QPn3cKz19 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3QPn3cKz19' 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:25.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85285 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85285 /var/tmp/bdevperf.sock 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85285 ']' 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.391 21:54:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.391 [2024-07-24 21:54:31.013487] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:25.391 [2024-07-24 21:54:31.013849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85285 ] 00:14:25.650 [2024-07-24 21:54:31.148565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.650 [2024-07-24 21:54:31.231856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.650 [2024-07-24 21:54:31.286841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:26.586 21:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.586 21:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:26.586 21:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3QPn3cKz19 00:14:26.586 [2024-07-24 21:54:32.210779] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:26.586 [2024-07-24 21:54:32.210921] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:26.586 [2024-07-24 21:54:32.215789] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:26.586 [2024-07-24 21:54:32.215845] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:26.586 [2024-07-24 21:54:32.215895] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:26.586 [2024-07-24 21:54:32.216496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d830 (107): Transport endpoint is not connected 00:14:26.586 [2024-07-24 21:54:32.217489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d830 (9): Bad file descriptor 00:14:26.586 [2024-07-24 21:54:32.218485] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:26.586 [2024-07-24 21:54:32.218508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:26.586 [2024-07-24 21:54:32.218539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:26.586 request: 00:14:26.586 { 00:14:26.586 "name": "TLSTEST", 00:14:26.586 "trtype": "tcp", 00:14:26.586 "traddr": "10.0.0.2", 00:14:26.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.586 "adrfam": "ipv4", 00:14:26.586 "trsvcid": "4420", 00:14:26.586 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:26.586 "psk": "/tmp/tmp.3QPn3cKz19", 00:14:26.586 "method": "bdev_nvme_attach_controller", 00:14:26.586 "req_id": 1 00:14:26.586 } 00:14:26.586 Got JSON-RPC error response 00:14:26.586 response: 00:14:26.586 { 00:14:26.586 "code": -5, 00:14:26.586 "message": "Input/output error" 00:14:26.586 } 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85285 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85285 ']' 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85285 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85285 00:14:26.586 killing process with pid 85285 00:14:26.586 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.586 00:14:26.586 Latency(us) 00:14:26.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.586 =================================================================================================================== 00:14:26.586 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85285' 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85285 00:14:26.586 [2024-07-24 21:54:32.264327] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:26.586 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85285 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.845 21:54:32 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85307 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85307 /var/tmp/bdevperf.sock 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85307 ']' 00:14:26.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:26.845 21:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.845 [2024-07-24 21:54:32.529458] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:26.845 [2024-07-24 21:54:32.529563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85307 ] 00:14:27.104 [2024-07-24 21:54:32.669097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.104 [2024-07-24 21:54:32.759467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.104 [2024-07-24 21:54:32.813327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:27.741 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:27.741 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:27.741 21:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:27.999 [2024-07-24 21:54:33.714022] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:27.999 [2024-07-24 21:54:33.715766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141c020 (9): Bad file descriptor 00:14:28.257 [2024-07-24 21:54:33.716761] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:28.257 [2024-07-24 21:54:33.716903] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:28.257 [2024-07-24 21:54:33.716927] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:28.257 request: 00:14:28.257 { 00:14:28.257 "name": "TLSTEST", 00:14:28.257 "trtype": "tcp", 00:14:28.257 "traddr": "10.0.0.2", 00:14:28.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.257 "adrfam": "ipv4", 00:14:28.257 "trsvcid": "4420", 00:14:28.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.257 "method": "bdev_nvme_attach_controller", 00:14:28.257 "req_id": 1 00:14:28.257 } 00:14:28.257 Got JSON-RPC error response 00:14:28.257 response: 00:14:28.257 { 00:14:28.257 "code": -5, 00:14:28.257 "message": "Input/output error" 00:14:28.257 } 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85307 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85307 ']' 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85307 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85307 00:14:28.257 killing process with pid 85307 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:28.257 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85307' 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85307 00:14:28.258 Received shutdown signal, test time was about 10.000000 seconds 00:14:28.258 00:14:28.258 Latency(us) 00:14:28.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.258 =================================================================================================================== 00:14:28.258 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85307 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 84873 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84873 ']' 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84873 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:28.258 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84873 00:14:28.516 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:28.516 killing process with pid 84873 00:14:28.516 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:28.516 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84873' 00:14:28.516 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84873 00:14:28.516 [2024-07-24 
21:54:33.987944] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:28.516 21:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 84873 00:14:28.516 21:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:28.516 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:28.516 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:28.516 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:28.516 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:28.516 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:28.516 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:28.774 21:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:28.774 21:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:28.774 21:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.2SnfbF5pTF 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.2SnfbF5pTF 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85350 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85350 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85350 ']' 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:28.775 21:54:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.775 [2024-07-24 21:54:34.307278] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:28.775 [2024-07-24 21:54:34.307355] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.775 [2024-07-24 21:54:34.439804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.032 [2024-07-24 21:54:34.528959] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.032 [2024-07-24 21:54:34.529204] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.032 [2024-07-24 21:54:34.529286] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.032 [2024-07-24 21:54:34.529362] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.032 [2024-07-24 21:54:34.529437] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.032 [2024-07-24 21:54:34.529564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.033 [2024-07-24 21:54:34.582704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.2SnfbF5pTF 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2SnfbF5pTF 00:14:29.598 21:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:29.855 [2024-07-24 21:54:35.518164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.855 21:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:30.112 21:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:30.370 [2024-07-24 21:54:36.006249] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:30.370 [2024-07-24 21:54:36.006803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.370 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:30.628 malloc0 00:14:30.628 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:30.885 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF 00:14:31.143 [2024-07-24 21:54:36.730629] 
tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2SnfbF5pTF 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2SnfbF5pTF' 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85405 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85405 /var/tmp/bdevperf.sock 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85405 ']' 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:31.143 21:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.143 [2024-07-24 21:54:36.803331] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
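Stripped of the xtrace noise, the setup_nvmf_tgt helper driven above amounts to a short rpc.py sequence; the flags below are taken verbatim from the log, only the $rpc shorthand is added here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests a secure (TLS) channel on the listener
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # bind the PSK file to the host that will be allowed to connect
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF

The -k listener and the --psk host entry are what produce the 'TLS support is considered experimental' and 'deprecated feature PSK path' notices seen in the log.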
00:14:31.143 [2024-07-24 21:54:36.803441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85405 ] 00:14:31.401 [2024-07-24 21:54:36.945298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.401 [2024-07-24 21:54:37.031971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.401 [2024-07-24 21:54:37.091567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.335 21:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:32.335 21:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:32.335 21:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF 00:14:32.335 [2024-07-24 21:54:37.897041] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.335 [2024-07-24 21:54:37.897838] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:32.335 TLSTESTn1 00:14:32.335 21:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:32.592 Running I/O for 10 seconds... 00:14:42.566 00:14:42.566 Latency(us) 00:14:42.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.566 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:42.566 Verification LBA range: start 0x0 length 0x2000 00:14:42.566 TLSTESTn1 : 10.02 4196.17 16.39 0.00 0.00 30444.77 6583.39 23473.80 00:14:42.566 =================================================================================================================== 00:14:42.566 Total : 4196.17 16.39 0.00 0.00 30444.77 6583.39 23473.80 00:14:42.566 0 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 85405 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85405 ']' 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85405 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85405 00:14:42.566 killing process with pid 85405 00:14:42.566 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.566 00:14:42.566 Latency(us) 00:14:42.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.566 =================================================================================================================== 00:14:42.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 
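The initiator side of the run above goes through bdevperf's private RPC socket rather than the target's; condensed into plain commands (again with an added $rpc shorthand, and simple backgrounding in place of the test's waitforlisten handshake), it is approximately:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # start bdevperf idle (-z) with its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach an NVMe/TCP controller over TLS; this creates the TLSTESTn1 bdev
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF
  # run the queued verify workload and collect the IOPS/latency table printed above
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The ~4196 IOPS table is the successful verify pass over the TLS connection; the all-zero latency table that follows appears to be bdevperf's shutdown summary after the process is killed, not a failure.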
00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85405' 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85405 00:14:42.566 [2024-07-24 21:54:48.145774] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:42.566 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85405 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.2SnfbF5pTF 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2SnfbF5pTF 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2SnfbF5pTF 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:42.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2SnfbF5pTF 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2SnfbF5pTF' 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85534 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85534 /var/tmp/bdevperf.sock 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85534 ']' 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:42.825 21:54:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.825 [2024-07-24 21:54:48.417890] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:42.825 [2024-07-24 21:54:48.418232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85534 ] 00:14:43.084 [2024-07-24 21:54:48.557561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.084 [2024-07-24 21:54:48.649258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.084 [2024-07-24 21:54:48.703790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:43.650 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:43.650 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:43.650 21:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF 00:14:43.908 [2024-07-24 21:54:49.567121] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.908 [2024-07-24 21:54:49.567974] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:43.908 [2024-07-24 21:54:49.568232] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.2SnfbF5pTF 00:14:43.908 request: 00:14:43.908 { 00:14:43.908 "name": "TLSTEST", 00:14:43.908 "trtype": "tcp", 00:14:43.908 "traddr": "10.0.0.2", 00:14:43.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.908 "adrfam": "ipv4", 00:14:43.908 "trsvcid": "4420", 00:14:43.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.908 "psk": "/tmp/tmp.2SnfbF5pTF", 00:14:43.908 "method": "bdev_nvme_attach_controller", 00:14:43.908 "req_id": 1 00:14:43.908 } 00:14:43.908 Got JSON-RPC error response 00:14:43.908 response: 00:14:43.908 { 00:14:43.908 "code": -1, 00:14:43.908 "message": "Operation not permitted" 00:14:43.908 } 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85534 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85534 ']' 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85534 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85534 00:14:43.908 killing process with pid 85534 00:14:43.908 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.908 00:14:43.908 Latency(us) 00:14:43.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.908 =================================================================================================================== 00:14:43.908 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85534' 00:14:43.908 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85534 00:14:43.908 21:54:49 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85534 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 85350 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85350 ']' 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85350 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85350 00:14:44.189 killing process with pid 85350 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85350' 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85350 00:14:44.189 [2024-07-24 21:54:49.845351] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:44.189 21:54:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85350 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85571 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85571 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85571 ']' 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:44.447 21:54:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.447 [2024-07-24 21:54:50.157804] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
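At this point the first negative check has completed and the target is being restarted for the second one: with the key file relaxed to 0666, the initiator-side attach above was rejected ('Operation not permitted', code -1), and the upcoming nvmf_subsystem_add_host is expected to be rejected as well ('Internal error', code -32603). The test drives both through its NOT helper; a condensed stand-alone equivalent, assuming rpc.py exits non-zero when the RPC returns an error, would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0666 /tmp/tmp.2SnfbF5pTF
  # initiator side: attaching with a world-readable PSK file must fail
  if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF; then
      echo 'unexpected success with world-readable PSK' >&2; exit 1
  fi
  # target side: registering the host with the same file must fail too
  if $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF; then
      echo 'unexpected success with world-readable PSK' >&2; exit 1
  fi
  chmod 0600 /tmp/tmp.2SnfbF5pTF   # restore the strict mode before the positive test is repeated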
00:14:44.447 [2024-07-24 21:54:50.158435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.704 [2024-07-24 21:54:50.307353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.704 [2024-07-24 21:54:50.401966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.704 [2024-07-24 21:54:50.402019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.704 [2024-07-24 21:54:50.402047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.704 [2024-07-24 21:54:50.402055] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.704 [2024-07-24 21:54:50.402062] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.704 [2024-07-24 21:54:50.402090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.961 [2024-07-24 21:54:50.457035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.2SnfbF5pTF 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2SnfbF5pTF 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.2SnfbF5pTF 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2SnfbF5pTF 00:14:45.528 21:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:45.786 [2024-07-24 21:54:51.298931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.786 21:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:46.044 21:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:46.302 [2024-07-24 21:54:51.811047] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:14:46.302 [2024-07-24 21:54:51.811264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.302 21:54:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:46.559 malloc0 00:14:46.559 21:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:46.816 21:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF 00:14:47.074 [2024-07-24 21:54:52.542904] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:47.074 [2024-07-24 21:54:52.542949] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:47.074 [2024-07-24 21:54:52.543006] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:47.074 request: 00:14:47.074 { 00:14:47.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.074 "host": "nqn.2016-06.io.spdk:host1", 00:14:47.074 "psk": "/tmp/tmp.2SnfbF5pTF", 00:14:47.074 "method": "nvmf_subsystem_add_host", 00:14:47.074 "req_id": 1 00:14:47.074 } 00:14:47.074 Got JSON-RPC error response 00:14:47.074 response: 00:14:47.074 { 00:14:47.074 "code": -32603, 00:14:47.074 "message": "Internal error" 00:14:47.074 } 00:14:47.074 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:47.074 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 85571 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85571 ']' 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85571 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85571 00:14:47.075 killing process with pid 85571 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85571' 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85571 00:14:47.075 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85571 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.2SnfbF5pTF 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85629 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85629 00:14:47.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85629 ']' 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:47.333 21:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.333 [2024-07-24 21:54:52.874716] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:47.333 [2024-07-24 21:54:52.875020] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.333 [2024-07-24 21:54:53.015177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.591 [2024-07-24 21:54:53.096924] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.591 [2024-07-24 21:54:53.097294] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.591 [2024-07-24 21:54:53.097436] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.591 [2024-07-24 21:54:53.097464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.592 [2024-07-24 21:54:53.097472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:47.592 [2024-07-24 21:54:53.097509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.592 [2024-07-24 21:54:53.155755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.2SnfbF5pTF 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2SnfbF5pTF 00:14:48.158 21:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:48.439 [2024-07-24 21:54:54.053049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.439 21:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:48.697 21:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:48.955 [2024-07-24 21:54:54.525158] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:48.955 [2024-07-24 21:54:54.525378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.955 21:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:49.213 malloc0 00:14:49.213 21:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:49.471 21:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF 00:14:49.730 [2024-07-24 21:54:55.252482] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:49.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
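With the target configured again, the run moves on to attaching TLSTESTn1 and then dumping the complete runtime state of both applications with save_config; the two large JSON documents that follow are that output. The PSK binding can be spot-checked directly from such a dump; a possible one-liner, assuming jq is available (jq is not used by the test itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config | jq -r '.subsystems[] | select(.subsystem == "nvmf") | .config[]
                            | select(.method == "nvmf_subsystem_add_host") | .params.psk'
  # expected output: /tmp/tmp.2SnfbF5pTF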
00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=85685 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 85685 /var/tmp/bdevperf.sock 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85685 ']' 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.730 21:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.730 [2024-07-24 21:54:55.321054] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:49.730 [2024-07-24 21:54:55.321339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85685 ] 00:14:49.989 [2024-07-24 21:54:55.458485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.989 [2024-07-24 21:54:55.539653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.989 [2024-07-24 21:54:55.597095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.556 21:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.556 21:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:50.556 21:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF 00:14:50.814 [2024-07-24 21:54:56.429062] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.814 [2024-07-24 21:54:56.429432] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:50.814 TLSTESTn1 00:14:50.814 21:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:51.381 21:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:51.381 "subsystems": [ 00:14:51.381 { 00:14:51.381 "subsystem": "keyring", 00:14:51.381 "config": [] 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "subsystem": "iobuf", 00:14:51.381 "config": [ 00:14:51.381 { 00:14:51.381 "method": "iobuf_set_options", 00:14:51.381 "params": { 00:14:51.381 "small_pool_count": 8192, 00:14:51.381 "large_pool_count": 1024, 00:14:51.381 "small_bufsize": 8192, 00:14:51.381 "large_bufsize": 135168 00:14:51.381 } 00:14:51.381 } 00:14:51.381 ] 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "subsystem": "sock", 00:14:51.381 "config": [ 00:14:51.381 { 00:14:51.381 "method": 
"sock_set_default_impl", 00:14:51.381 "params": { 00:14:51.381 "impl_name": "uring" 00:14:51.381 } 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "method": "sock_impl_set_options", 00:14:51.381 "params": { 00:14:51.381 "impl_name": "ssl", 00:14:51.381 "recv_buf_size": 4096, 00:14:51.381 "send_buf_size": 4096, 00:14:51.381 "enable_recv_pipe": true, 00:14:51.381 "enable_quickack": false, 00:14:51.381 "enable_placement_id": 0, 00:14:51.381 "enable_zerocopy_send_server": true, 00:14:51.381 "enable_zerocopy_send_client": false, 00:14:51.381 "zerocopy_threshold": 0, 00:14:51.381 "tls_version": 0, 00:14:51.381 "enable_ktls": false 00:14:51.381 } 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "method": "sock_impl_set_options", 00:14:51.381 "params": { 00:14:51.381 "impl_name": "posix", 00:14:51.381 "recv_buf_size": 2097152, 00:14:51.381 "send_buf_size": 2097152, 00:14:51.381 "enable_recv_pipe": true, 00:14:51.381 "enable_quickack": false, 00:14:51.381 "enable_placement_id": 0, 00:14:51.381 "enable_zerocopy_send_server": true, 00:14:51.381 "enable_zerocopy_send_client": false, 00:14:51.381 "zerocopy_threshold": 0, 00:14:51.381 "tls_version": 0, 00:14:51.381 "enable_ktls": false 00:14:51.381 } 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "method": "sock_impl_set_options", 00:14:51.381 "params": { 00:14:51.381 "impl_name": "uring", 00:14:51.381 "recv_buf_size": 2097152, 00:14:51.381 "send_buf_size": 2097152, 00:14:51.381 "enable_recv_pipe": true, 00:14:51.381 "enable_quickack": false, 00:14:51.381 "enable_placement_id": 0, 00:14:51.381 "enable_zerocopy_send_server": false, 00:14:51.381 "enable_zerocopy_send_client": false, 00:14:51.381 "zerocopy_threshold": 0, 00:14:51.381 "tls_version": 0, 00:14:51.381 "enable_ktls": false 00:14:51.381 } 00:14:51.381 } 00:14:51.381 ] 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "subsystem": "vmd", 00:14:51.381 "config": [] 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "subsystem": "accel", 00:14:51.381 "config": [ 00:14:51.381 { 00:14:51.381 "method": "accel_set_options", 00:14:51.381 "params": { 00:14:51.381 "small_cache_size": 128, 00:14:51.381 "large_cache_size": 16, 00:14:51.381 "task_count": 2048, 00:14:51.381 "sequence_count": 2048, 00:14:51.381 "buf_count": 2048 00:14:51.381 } 00:14:51.381 } 00:14:51.381 ] 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "subsystem": "bdev", 00:14:51.381 "config": [ 00:14:51.381 { 00:14:51.381 "method": "bdev_set_options", 00:14:51.381 "params": { 00:14:51.381 "bdev_io_pool_size": 65535, 00:14:51.381 "bdev_io_cache_size": 256, 00:14:51.381 "bdev_auto_examine": true, 00:14:51.381 "iobuf_small_cache_size": 128, 00:14:51.381 "iobuf_large_cache_size": 16 00:14:51.381 } 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "method": "bdev_raid_set_options", 00:14:51.381 "params": { 00:14:51.381 "process_window_size_kb": 1024 00:14:51.381 } 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "method": "bdev_iscsi_set_options", 00:14:51.381 "params": { 00:14:51.381 "timeout_sec": 30 00:14:51.381 } 00:14:51.381 }, 00:14:51.381 { 00:14:51.381 "method": "bdev_nvme_set_options", 00:14:51.381 "params": { 00:14:51.381 "action_on_timeout": "none", 00:14:51.381 "timeout_us": 0, 00:14:51.381 "timeout_admin_us": 0, 00:14:51.381 "keep_alive_timeout_ms": 10000, 00:14:51.381 "arbitration_burst": 0, 00:14:51.381 "low_priority_weight": 0, 00:14:51.381 "medium_priority_weight": 0, 00:14:51.381 "high_priority_weight": 0, 00:14:51.381 "nvme_adminq_poll_period_us": 10000, 00:14:51.381 "nvme_ioq_poll_period_us": 0, 00:14:51.381 "io_queue_requests": 0, 00:14:51.381 "delay_cmd_submit": 
true, 00:14:51.381 "transport_retry_count": 4, 00:14:51.381 "bdev_retry_count": 3, 00:14:51.381 "transport_ack_timeout": 0, 00:14:51.381 "ctrlr_loss_timeout_sec": 0, 00:14:51.381 "reconnect_delay_sec": 0, 00:14:51.381 "fast_io_fail_timeout_sec": 0, 00:14:51.382 "disable_auto_failback": false, 00:14:51.382 "generate_uuids": false, 00:14:51.382 "transport_tos": 0, 00:14:51.382 "nvme_error_stat": false, 00:14:51.382 "rdma_srq_size": 0, 00:14:51.382 "io_path_stat": false, 00:14:51.382 "allow_accel_sequence": false, 00:14:51.382 "rdma_max_cq_size": 0, 00:14:51.382 "rdma_cm_event_timeout_ms": 0, 00:14:51.382 "dhchap_digests": [ 00:14:51.382 "sha256", 00:14:51.382 "sha384", 00:14:51.382 "sha512" 00:14:51.382 ], 00:14:51.382 "dhchap_dhgroups": [ 00:14:51.382 "null", 00:14:51.382 "ffdhe2048", 00:14:51.382 "ffdhe3072", 00:14:51.382 "ffdhe4096", 00:14:51.382 "ffdhe6144", 00:14:51.382 "ffdhe8192" 00:14:51.382 ] 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "bdev_nvme_set_hotplug", 00:14:51.382 "params": { 00:14:51.382 "period_us": 100000, 00:14:51.382 "enable": false 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "bdev_malloc_create", 00:14:51.382 "params": { 00:14:51.382 "name": "malloc0", 00:14:51.382 "num_blocks": 8192, 00:14:51.382 "block_size": 4096, 00:14:51.382 "physical_block_size": 4096, 00:14:51.382 "uuid": "29e5b908-1710-40b7-af98-8ff6e84d6263", 00:14:51.382 "optimal_io_boundary": 0 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "bdev_wait_for_examine" 00:14:51.382 } 00:14:51.382 ] 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "subsystem": "nbd", 00:14:51.382 "config": [] 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "subsystem": "scheduler", 00:14:51.382 "config": [ 00:14:51.382 { 00:14:51.382 "method": "framework_set_scheduler", 00:14:51.382 "params": { 00:14:51.382 "name": "static" 00:14:51.382 } 00:14:51.382 } 00:14:51.382 ] 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "subsystem": "nvmf", 00:14:51.382 "config": [ 00:14:51.382 { 00:14:51.382 "method": "nvmf_set_config", 00:14:51.382 "params": { 00:14:51.382 "discovery_filter": "match_any", 00:14:51.382 "admin_cmd_passthru": { 00:14:51.382 "identify_ctrlr": false 00:14:51.382 } 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "nvmf_set_max_subsystems", 00:14:51.382 "params": { 00:14:51.382 "max_subsystems": 1024 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "nvmf_set_crdt", 00:14:51.382 "params": { 00:14:51.382 "crdt1": 0, 00:14:51.382 "crdt2": 0, 00:14:51.382 "crdt3": 0 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "nvmf_create_transport", 00:14:51.382 "params": { 00:14:51.382 "trtype": "TCP", 00:14:51.382 "max_queue_depth": 128, 00:14:51.382 "max_io_qpairs_per_ctrlr": 127, 00:14:51.382 "in_capsule_data_size": 4096, 00:14:51.382 "max_io_size": 131072, 00:14:51.382 "io_unit_size": 131072, 00:14:51.382 "max_aq_depth": 128, 00:14:51.382 "num_shared_buffers": 511, 00:14:51.382 "buf_cache_size": 4294967295, 00:14:51.382 "dif_insert_or_strip": false, 00:14:51.382 "zcopy": false, 00:14:51.382 "c2h_success": false, 00:14:51.382 "sock_priority": 0, 00:14:51.382 "abort_timeout_sec": 1, 00:14:51.382 "ack_timeout": 0, 00:14:51.382 "data_wr_pool_size": 0 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "nvmf_create_subsystem", 00:14:51.382 "params": { 00:14:51.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.382 "allow_any_host": false, 00:14:51.382 "serial_number": "SPDK00000000000001", 00:14:51.382 
"model_number": "SPDK bdev Controller", 00:14:51.382 "max_namespaces": 10, 00:14:51.382 "min_cntlid": 1, 00:14:51.382 "max_cntlid": 65519, 00:14:51.382 "ana_reporting": false 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "nvmf_subsystem_add_host", 00:14:51.382 "params": { 00:14:51.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.382 "host": "nqn.2016-06.io.spdk:host1", 00:14:51.382 "psk": "/tmp/tmp.2SnfbF5pTF" 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "nvmf_subsystem_add_ns", 00:14:51.382 "params": { 00:14:51.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.382 "namespace": { 00:14:51.382 "nsid": 1, 00:14:51.382 "bdev_name": "malloc0", 00:14:51.382 "nguid": "29E5B908171040B7AF988FF6E84D6263", 00:14:51.382 "uuid": "29e5b908-1710-40b7-af98-8ff6e84d6263", 00:14:51.382 "no_auto_visible": false 00:14:51.382 } 00:14:51.382 } 00:14:51.382 }, 00:14:51.382 { 00:14:51.382 "method": "nvmf_subsystem_add_listener", 00:14:51.382 "params": { 00:14:51.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.382 "listen_address": { 00:14:51.382 "trtype": "TCP", 00:14:51.382 "adrfam": "IPv4", 00:14:51.382 "traddr": "10.0.0.2", 00:14:51.382 "trsvcid": "4420" 00:14:51.382 }, 00:14:51.382 "secure_channel": true 00:14:51.382 } 00:14:51.382 } 00:14:51.382 ] 00:14:51.382 } 00:14:51.382 ] 00:14:51.382 }' 00:14:51.382 21:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:51.641 21:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:51.641 "subsystems": [ 00:14:51.641 { 00:14:51.641 "subsystem": "keyring", 00:14:51.641 "config": [] 00:14:51.641 }, 00:14:51.641 { 00:14:51.641 "subsystem": "iobuf", 00:14:51.641 "config": [ 00:14:51.641 { 00:14:51.641 "method": "iobuf_set_options", 00:14:51.641 "params": { 00:14:51.641 "small_pool_count": 8192, 00:14:51.641 "large_pool_count": 1024, 00:14:51.641 "small_bufsize": 8192, 00:14:51.641 "large_bufsize": 135168 00:14:51.641 } 00:14:51.641 } 00:14:51.641 ] 00:14:51.641 }, 00:14:51.641 { 00:14:51.641 "subsystem": "sock", 00:14:51.641 "config": [ 00:14:51.641 { 00:14:51.641 "method": "sock_set_default_impl", 00:14:51.641 "params": { 00:14:51.641 "impl_name": "uring" 00:14:51.641 } 00:14:51.641 }, 00:14:51.641 { 00:14:51.641 "method": "sock_impl_set_options", 00:14:51.641 "params": { 00:14:51.641 "impl_name": "ssl", 00:14:51.641 "recv_buf_size": 4096, 00:14:51.641 "send_buf_size": 4096, 00:14:51.641 "enable_recv_pipe": true, 00:14:51.641 "enable_quickack": false, 00:14:51.641 "enable_placement_id": 0, 00:14:51.641 "enable_zerocopy_send_server": true, 00:14:51.641 "enable_zerocopy_send_client": false, 00:14:51.641 "zerocopy_threshold": 0, 00:14:51.641 "tls_version": 0, 00:14:51.641 "enable_ktls": false 00:14:51.641 } 00:14:51.641 }, 00:14:51.641 { 00:14:51.642 "method": "sock_impl_set_options", 00:14:51.642 "params": { 00:14:51.642 "impl_name": "posix", 00:14:51.642 "recv_buf_size": 2097152, 00:14:51.642 "send_buf_size": 2097152, 00:14:51.642 "enable_recv_pipe": true, 00:14:51.642 "enable_quickack": false, 00:14:51.642 "enable_placement_id": 0, 00:14:51.642 "enable_zerocopy_send_server": true, 00:14:51.642 "enable_zerocopy_send_client": false, 00:14:51.642 "zerocopy_threshold": 0, 00:14:51.642 "tls_version": 0, 00:14:51.642 "enable_ktls": false 00:14:51.642 } 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "method": "sock_impl_set_options", 00:14:51.642 "params": { 00:14:51.642 "impl_name": "uring", 00:14:51.642 "recv_buf_size": 
2097152, 00:14:51.642 "send_buf_size": 2097152, 00:14:51.642 "enable_recv_pipe": true, 00:14:51.642 "enable_quickack": false, 00:14:51.642 "enable_placement_id": 0, 00:14:51.642 "enable_zerocopy_send_server": false, 00:14:51.642 "enable_zerocopy_send_client": false, 00:14:51.642 "zerocopy_threshold": 0, 00:14:51.642 "tls_version": 0, 00:14:51.642 "enable_ktls": false 00:14:51.642 } 00:14:51.642 } 00:14:51.642 ] 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "subsystem": "vmd", 00:14:51.642 "config": [] 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "subsystem": "accel", 00:14:51.642 "config": [ 00:14:51.642 { 00:14:51.642 "method": "accel_set_options", 00:14:51.642 "params": { 00:14:51.642 "small_cache_size": 128, 00:14:51.642 "large_cache_size": 16, 00:14:51.642 "task_count": 2048, 00:14:51.642 "sequence_count": 2048, 00:14:51.642 "buf_count": 2048 00:14:51.642 } 00:14:51.642 } 00:14:51.642 ] 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "subsystem": "bdev", 00:14:51.642 "config": [ 00:14:51.642 { 00:14:51.642 "method": "bdev_set_options", 00:14:51.642 "params": { 00:14:51.642 "bdev_io_pool_size": 65535, 00:14:51.642 "bdev_io_cache_size": 256, 00:14:51.642 "bdev_auto_examine": true, 00:14:51.642 "iobuf_small_cache_size": 128, 00:14:51.642 "iobuf_large_cache_size": 16 00:14:51.642 } 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "method": "bdev_raid_set_options", 00:14:51.642 "params": { 00:14:51.642 "process_window_size_kb": 1024 00:14:51.642 } 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "method": "bdev_iscsi_set_options", 00:14:51.642 "params": { 00:14:51.642 "timeout_sec": 30 00:14:51.642 } 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "method": "bdev_nvme_set_options", 00:14:51.642 "params": { 00:14:51.642 "action_on_timeout": "none", 00:14:51.642 "timeout_us": 0, 00:14:51.642 "timeout_admin_us": 0, 00:14:51.642 "keep_alive_timeout_ms": 10000, 00:14:51.642 "arbitration_burst": 0, 00:14:51.642 "low_priority_weight": 0, 00:14:51.642 "medium_priority_weight": 0, 00:14:51.642 "high_priority_weight": 0, 00:14:51.642 "nvme_adminq_poll_period_us": 10000, 00:14:51.642 "nvme_ioq_poll_period_us": 0, 00:14:51.642 "io_queue_requests": 512, 00:14:51.642 "delay_cmd_submit": true, 00:14:51.642 "transport_retry_count": 4, 00:14:51.642 "bdev_retry_count": 3, 00:14:51.642 "transport_ack_timeout": 0, 00:14:51.642 "ctrlr_loss_timeout_sec": 0, 00:14:51.642 "reconnect_delay_sec": 0, 00:14:51.642 "fast_io_fail_timeout_sec": 0, 00:14:51.642 "disable_auto_failback": false, 00:14:51.642 "generate_uuids": false, 00:14:51.642 "transport_tos": 0, 00:14:51.642 "nvme_error_stat": false, 00:14:51.642 "rdma_srq_size": 0, 00:14:51.642 "io_path_stat": false, 00:14:51.642 "allow_accel_sequence": false, 00:14:51.642 "rdma_max_cq_size": 0, 00:14:51.642 "rdma_cm_event_timeout_ms": 0, 00:14:51.642 "dhchap_digests": [ 00:14:51.642 "sha256", 00:14:51.642 "sha384", 00:14:51.642 "sha512" 00:14:51.642 ], 00:14:51.642 "dhchap_dhgroups": [ 00:14:51.642 "null", 00:14:51.642 "ffdhe2048", 00:14:51.642 "ffdhe3072", 00:14:51.642 "ffdhe4096", 00:14:51.642 "ffdhe6144", 00:14:51.642 "ffdhe8192" 00:14:51.642 ] 00:14:51.642 } 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "method": "bdev_nvme_attach_controller", 00:14:51.642 "params": { 00:14:51.642 "name": "TLSTEST", 00:14:51.642 "trtype": "TCP", 00:14:51.642 "adrfam": "IPv4", 00:14:51.642 "traddr": "10.0.0.2", 00:14:51.642 "trsvcid": "4420", 00:14:51.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:51.642 "prchk_reftag": false, 00:14:51.642 "prchk_guard": false, 00:14:51.642 "ctrlr_loss_timeout_sec": 0, 
00:14:51.642 "reconnect_delay_sec": 0, 00:14:51.642 "fast_io_fail_timeout_sec": 0, 00:14:51.642 "psk": "/tmp/tmp.2SnfbF5pTF", 00:14:51.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:51.642 "hdgst": false, 00:14:51.642 "ddgst": false 00:14:51.642 } 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "method": "bdev_nvme_set_hotplug", 00:14:51.642 "params": { 00:14:51.642 "period_us": 100000, 00:14:51.642 "enable": false 00:14:51.642 } 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "method": "bdev_wait_for_examine" 00:14:51.642 } 00:14:51.642 ] 00:14:51.642 }, 00:14:51.642 { 00:14:51.642 "subsystem": "nbd", 00:14:51.642 "config": [] 00:14:51.642 } 00:14:51.642 ] 00:14:51.642 }' 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 85685 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85685 ']' 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85685 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85685 00:14:51.642 killing process with pid 85685 00:14:51.642 Received shutdown signal, test time was about 10.000000 seconds 00:14:51.642 00:14:51.642 Latency(us) 00:14:51.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.642 =================================================================================================================== 00:14:51.642 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85685' 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85685 00:14:51.642 [2024-07-24 21:54:57.226487] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:51.642 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85685 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 85629 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85629 ']' 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85629 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85629 00:14:51.900 killing process with pid 85629 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85629' 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85629 00:14:51.900 [2024-07-24 21:54:57.458573] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 
times 00:14:51.900 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85629 00:14:52.160 21:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:52.160 21:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:52.160 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:52.160 21:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:52.160 "subsystems": [ 00:14:52.160 { 00:14:52.160 "subsystem": "keyring", 00:14:52.160 "config": [] 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "subsystem": "iobuf", 00:14:52.160 "config": [ 00:14:52.160 { 00:14:52.160 "method": "iobuf_set_options", 00:14:52.160 "params": { 00:14:52.160 "small_pool_count": 8192, 00:14:52.160 "large_pool_count": 1024, 00:14:52.160 "small_bufsize": 8192, 00:14:52.160 "large_bufsize": 135168 00:14:52.160 } 00:14:52.160 } 00:14:52.160 ] 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "subsystem": "sock", 00:14:52.160 "config": [ 00:14:52.160 { 00:14:52.160 "method": "sock_set_default_impl", 00:14:52.160 "params": { 00:14:52.160 "impl_name": "uring" 00:14:52.160 } 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "method": "sock_impl_set_options", 00:14:52.160 "params": { 00:14:52.160 "impl_name": "ssl", 00:14:52.160 "recv_buf_size": 4096, 00:14:52.160 "send_buf_size": 4096, 00:14:52.160 "enable_recv_pipe": true, 00:14:52.160 "enable_quickack": false, 00:14:52.160 "enable_placement_id": 0, 00:14:52.160 "enable_zerocopy_send_server": true, 00:14:52.160 "enable_zerocopy_send_client": false, 00:14:52.160 "zerocopy_threshold": 0, 00:14:52.160 "tls_version": 0, 00:14:52.160 "enable_ktls": false 00:14:52.160 } 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "method": "sock_impl_set_options", 00:14:52.160 "params": { 00:14:52.160 "impl_name": "posix", 00:14:52.160 "recv_buf_size": 2097152, 00:14:52.160 "send_buf_size": 2097152, 00:14:52.160 "enable_recv_pipe": true, 00:14:52.160 "enable_quickack": false, 00:14:52.160 "enable_placement_id": 0, 00:14:52.160 "enable_zerocopy_send_server": true, 00:14:52.160 "enable_zerocopy_send_client": false, 00:14:52.160 "zerocopy_threshold": 0, 00:14:52.160 "tls_version": 0, 00:14:52.160 "enable_ktls": false 00:14:52.160 } 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "method": "sock_impl_set_options", 00:14:52.160 "params": { 00:14:52.160 "impl_name": "uring", 00:14:52.160 "recv_buf_size": 2097152, 00:14:52.160 "send_buf_size": 2097152, 00:14:52.160 "enable_recv_pipe": true, 00:14:52.160 "enable_quickack": false, 00:14:52.160 "enable_placement_id": 0, 00:14:52.160 "enable_zerocopy_send_server": false, 00:14:52.160 "enable_zerocopy_send_client": false, 00:14:52.160 "zerocopy_threshold": 0, 00:14:52.160 "tls_version": 0, 00:14:52.160 "enable_ktls": false 00:14:52.160 } 00:14:52.160 } 00:14:52.160 ] 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "subsystem": "vmd", 00:14:52.160 "config": [] 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "subsystem": "accel", 00:14:52.160 "config": [ 00:14:52.160 { 00:14:52.160 "method": "accel_set_options", 00:14:52.160 "params": { 00:14:52.160 "small_cache_size": 128, 00:14:52.160 "large_cache_size": 16, 00:14:52.160 "task_count": 2048, 00:14:52.160 "sequence_count": 2048, 00:14:52.160 "buf_count": 2048 00:14:52.160 } 00:14:52.160 } 00:14:52.160 ] 00:14:52.160 }, 00:14:52.160 { 00:14:52.160 "subsystem": "bdev", 00:14:52.160 "config": [ 00:14:52.160 { 00:14:52.160 "method": "bdev_set_options", 00:14:52.160 "params": { 00:14:52.160 "bdev_io_pool_size": 65535, 
00:14:52.160 "bdev_io_cache_size": 256, 00:14:52.160 "bdev_auto_examine": true, 00:14:52.160 "iobuf_small_cache_size": 128, 00:14:52.160 "iobuf_large_cache_size": 16 00:14:52.160 } 00:14:52.160 }, 00:14:52.160 { 00:14:52.161 "method": "bdev_raid_set_options", 00:14:52.161 "params": { 00:14:52.161 "process_window_size_kb": 1024 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "bdev_iscsi_set_options", 00:14:52.161 "params": { 00:14:52.161 "timeout_sec": 30 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "bdev_nvme_set_options", 00:14:52.161 "params": { 00:14:52.161 "action_on_timeout": "none", 00:14:52.161 "timeout_us": 0, 00:14:52.161 "timeout_admin_us": 0, 00:14:52.161 "keep_alive_timeout_ms": 10000, 00:14:52.161 "arbitration_burst": 0, 00:14:52.161 "low_priority_weight": 0, 00:14:52.161 "medium_priority_weight": 0, 00:14:52.161 "high_priority_weight": 0, 00:14:52.161 "nvme_adminq_poll_period_us": 10000, 00:14:52.161 "nvme_ioq_poll_period_us": 0, 00:14:52.161 "io_queue_requests": 0, 00:14:52.161 "delay_cmd_submit": true, 00:14:52.161 "transport_retry_count": 4, 00:14:52.161 "bdev_retry_count": 3, 00:14:52.161 "transport_ack_timeout": 0, 00:14:52.161 "ctrlr_loss_timeout_sec": 0, 00:14:52.161 "reconnect_delay_sec": 0, 00:14:52.161 "fast_io_fail_timeout_sec": 0, 00:14:52.161 "disable_auto_failback": false, 00:14:52.161 "generate_uuids": false, 00:14:52.161 "transport_tos": 0, 00:14:52.161 "nvme_error_stat": false, 00:14:52.161 "rdma_srq_size": 0, 00:14:52.161 "io_path_stat": false, 00:14:52.161 "allow_accel_sequence": false, 00:14:52.161 "rdma_max_cq_size": 0, 00:14:52.161 "rdma_cm_event_timeout_ms": 0, 00:14:52.161 "dhchap_digests": [ 00:14:52.161 "sha256", 00:14:52.161 "sha384", 00:14:52.161 "sha512" 00:14:52.161 ], 00:14:52.161 "dhchap_dhgroups": [ 00:14:52.161 "null", 00:14:52.161 "ffdhe2048", 00:14:52.161 "ffdhe3072", 00:14:52.161 "ffdhe4096", 00:14:52.161 "ffdhe6144", 00:14:52.161 "ffdhe8192" 00:14:52.161 ] 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "bdev_nvme_set_hotplug", 00:14:52.161 "params": { 00:14:52.161 "period_us": 100000, 00:14:52.161 "enable": false 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "bdev_malloc_create", 00:14:52.161 "params": { 00:14:52.161 "name": "malloc0", 00:14:52.161 "num_blocks": 8192, 00:14:52.161 "block_size": 4096, 00:14:52.161 "physical_block_size": 4096, 00:14:52.161 "uuid": "29e5b908-1710-40b7-af98-8ff6e84d6263", 00:14:52.161 "optimal_io_boundary": 0 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "bdev_wait_for_examine" 00:14:52.161 } 00:14:52.161 ] 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "subsystem": "nbd", 00:14:52.161 "config": [] 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "subsystem": "scheduler", 00:14:52.161 "config": [ 00:14:52.161 { 00:14:52.161 "method": "framework_set_scheduler", 00:14:52.161 "params": { 00:14:52.161 "name": "static" 00:14:52.161 } 00:14:52.161 } 00:14:52.161 ] 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "subsystem": "nvmf", 00:14:52.161 "config": [ 00:14:52.161 { 00:14:52.161 "method": "nvmf_set_config", 00:14:52.161 "params": { 00:14:52.161 "discovery_filter": "match_any", 00:14:52.161 "admin_cmd_passthru": { 00:14:52.161 "identify_ctrlr": false 00:14:52.161 } 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "nvmf_set_max_subsystems", 00:14:52.161 "params": { 00:14:52.161 "max_subsystems": 1024 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "nvmf_set_crdt", 
00:14:52.161 "params": { 00:14:52.161 "crdt1": 0, 00:14:52.161 "crdt2": 0, 00:14:52.161 "crdt3": 0 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "nvmf_create_transport", 00:14:52.161 "params": { 00:14:52.161 "trtype": "TCP", 00:14:52.161 "max_queue_depth": 128, 00:14:52.161 "max_io_qpairs_per_ctrlr": 127, 00:14:52.161 "in_capsule_data_size": 4096, 00:14:52.161 "max_io_size": 131072, 00:14:52.161 "io_unit_size": 131072, 00:14:52.161 "max_aq_depth": 128, 00:14:52.161 "num_shared_buffers": 511, 00:14:52.161 "buf_cache_size": 4294967295, 00:14:52.161 "dif_insert_or_strip": false, 00:14:52.161 "zcopy": false, 00:14:52.161 "c2h_success": false, 00:14:52.161 "sock_priority": 0, 00:14:52.161 "abort_timeout_sec": 1, 00:14:52.161 "ack_timeout": 0, 00:14:52.161 "data_wr_pool_size": 0 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "nvmf_create_subsystem", 00:14:52.161 "params": { 00:14:52.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.161 "allow_any_host": false, 00:14:52.161 "serial_number": "SPDK00000000000001", 00:14:52.161 "model_number": "SPDK bdev Controller", 00:14:52.161 "max_namespaces": 10, 00:14:52.161 "min_cntlid": 1, 00:14:52.161 "max_cntlid": 65519, 00:14:52.161 "ana_reporting": false 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "nvmf_subsystem_add_host", 00:14:52.161 "params": { 00:14:52.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.161 "host": "nqn.2016-06.io.spdk:host1", 00:14:52.161 "psk": "/tmp/tmp.2SnfbF5pTF" 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "nvmf_subsystem_add_ns", 00:14:52.161 "params": { 00:14:52.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.161 "namespace": { 00:14:52.161 "nsid": 1, 00:14:52.161 "bdev_name": "malloc0", 00:14:52.161 "nguid": "29E5B908171040B7AF988FF6E84D6263", 00:14:52.161 "uuid": "29e5b908-1710-40b7-af98-8ff6e84d6263", 00:14:52.161 "no_auto_visible": false 00:14:52.161 } 00:14:52.161 } 00:14:52.161 }, 00:14:52.161 { 00:14:52.161 "method": "nvmf_subsystem_add_listener", 00:14:52.161 "params": { 00:14:52.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.161 "listen_address": { 00:14:52.161 "trtype": "TCP", 00:14:52.161 "adrfam": "IPv4", 00:14:52.161 "traddr": "10.0.0.2", 00:14:52.161 "trsvcid": "4420" 00:14:52.161 }, 00:14:52.161 "secure_channel": true 00:14:52.161 } 00:14:52.161 } 00:14:52.161 ] 00:14:52.161 } 00:14:52.161 ] 00:14:52.161 }' 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85728 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85728 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85728 ']' 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
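This final pass rebuilds both applications purely from the configuration captured earlier: the saved target JSON is echoed back into nvmf_tgt through -c /dev/fd/62 here, and the bdevperf JSON goes in through -c /dev/fd/63 just below, so no rpc.py setup calls are repeated. A minimal sketch of the same round-trip outside the test harness (omitting the ip netns / -i / -e plumbing the log shows):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tgtconf=$($rpc save_config)                                   # snapshot the live target's configuration
  # ...stop the target, then bring it back with identical state:
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")

Because the saved nvmf section already contains the secure_channel listener and the host entry with its psk path, the relaunched target comes up listening for TLS connections immediately, which the final bdevperf pass then verifies.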
00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.161 21:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.161 [2024-07-24 21:54:57.733527] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:52.162 [2024-07-24 21:54:57.733840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.162 [2024-07-24 21:54:57.870508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.419 [2024-07-24 21:54:57.961543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.419 [2024-07-24 21:54:57.961595] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.419 [2024-07-24 21:54:57.961621] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.419 [2024-07-24 21:54:57.961672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.419 [2024-07-24 21:54:57.961679] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.419 [2024-07-24 21:54:57.961772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.419 [2024-07-24 21:54:58.132041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.690 [2024-07-24 21:54:58.199499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.690 [2024-07-24 21:54:58.215407] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:52.690 [2024-07-24 21:54:58.231392] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:52.690 [2024-07-24 21:54:58.231582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=85760 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 85760 /var/tmp/bdevperf.sock 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85760 ']' 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.282 21:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:53.282 "subsystems": [ 00:14:53.282 { 00:14:53.282 "subsystem": 
"keyring", 00:14:53.282 "config": [] 00:14:53.282 }, 00:14:53.282 { 00:14:53.282 "subsystem": "iobuf", 00:14:53.282 "config": [ 00:14:53.282 { 00:14:53.282 "method": "iobuf_set_options", 00:14:53.282 "params": { 00:14:53.282 "small_pool_count": 8192, 00:14:53.282 "large_pool_count": 1024, 00:14:53.282 "small_bufsize": 8192, 00:14:53.282 "large_bufsize": 135168 00:14:53.282 } 00:14:53.282 } 00:14:53.282 ] 00:14:53.282 }, 00:14:53.282 { 00:14:53.282 "subsystem": "sock", 00:14:53.282 "config": [ 00:14:53.282 { 00:14:53.282 "method": "sock_set_default_impl", 00:14:53.282 "params": { 00:14:53.282 "impl_name": "uring" 00:14:53.282 } 00:14:53.282 }, 00:14:53.282 { 00:14:53.282 "method": "sock_impl_set_options", 00:14:53.282 "params": { 00:14:53.282 "impl_name": "ssl", 00:14:53.282 "recv_buf_size": 4096, 00:14:53.282 "send_buf_size": 4096, 00:14:53.282 "enable_recv_pipe": true, 00:14:53.282 "enable_quickack": false, 00:14:53.282 "enable_placement_id": 0, 00:14:53.282 "enable_zerocopy_send_server": true, 00:14:53.282 "enable_zerocopy_send_client": false, 00:14:53.282 "zerocopy_threshold": 0, 00:14:53.282 "tls_version": 0, 00:14:53.282 "enable_ktls": false 00:14:53.282 } 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "method": "sock_impl_set_options", 00:14:53.283 "params": { 00:14:53.283 "impl_name": "posix", 00:14:53.283 "recv_buf_size": 2097152, 00:14:53.283 "send_buf_size": 2097152, 00:14:53.283 "enable_recv_pipe": true, 00:14:53.283 "enable_quickack": false, 00:14:53.283 "enable_placement_id": 0, 00:14:53.283 "enable_zerocopy_send_server": true, 00:14:53.283 "enable_zerocopy_send_client": false, 00:14:53.283 "zerocopy_threshold": 0, 00:14:53.283 "tls_version": 0, 00:14:53.283 "enable_ktls": false 00:14:53.283 } 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "method": "sock_impl_set_options", 00:14:53.283 "params": { 00:14:53.283 "impl_name": "uring", 00:14:53.283 "recv_buf_size": 2097152, 00:14:53.283 "send_buf_size": 2097152, 00:14:53.283 "enable_recv_pipe": true, 00:14:53.283 "enable_quickack": false, 00:14:53.283 "enable_placement_id": 0, 00:14:53.283 "enable_zerocopy_send_server": false, 00:14:53.283 "enable_zerocopy_send_client": false, 00:14:53.283 "zerocopy_threshold": 0, 00:14:53.283 "tls_version": 0, 00:14:53.283 "enable_ktls": false 00:14:53.283 } 00:14:53.283 } 00:14:53.283 ] 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "subsystem": "vmd", 00:14:53.283 "config": [] 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "subsystem": "accel", 00:14:53.283 "config": [ 00:14:53.283 { 00:14:53.283 "method": "accel_set_options", 00:14:53.283 "params": { 00:14:53.283 "small_cache_size": 128, 00:14:53.283 "large_cache_size": 16, 00:14:53.283 "task_count": 2048, 00:14:53.283 "sequence_count": 2048, 00:14:53.283 "buf_count": 2048 00:14:53.283 } 00:14:53.283 } 00:14:53.283 ] 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "subsystem": "bdev", 00:14:53.283 "config": [ 00:14:53.283 { 00:14:53.283 "method": "bdev_set_options", 00:14:53.283 "params": { 00:14:53.283 "bdev_io_pool_size": 65535, 00:14:53.283 "bdev_io_cache_size": 256, 00:14:53.283 "bdev_auto_examine": true, 00:14:53.283 "iobuf_small_cache_size": 128, 00:14:53.283 "iobuf_large_cache_size": 16 00:14:53.283 } 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "method": "bdev_raid_set_options", 00:14:53.283 "params": { 00:14:53.283 "process_window_size_kb": 1024 00:14:53.283 } 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "method": "bdev_iscsi_set_options", 00:14:53.283 "params": { 00:14:53.283 "timeout_sec": 30 00:14:53.283 } 00:14:53.283 }, 00:14:53.283 { 
00:14:53.283 "method": "bdev_nvme_set_options", 00:14:53.283 "params": { 00:14:53.283 "action_on_timeout": "none", 00:14:53.283 "timeout_us": 0, 00:14:53.283 "timeout_admin_us": 0, 00:14:53.283 "keep_alive_timeout_ms": 10000, 00:14:53.283 "arbitration_burst": 0, 00:14:53.283 "low_priority_weight": 0, 00:14:53.283 "medium_priority_weight": 0, 00:14:53.283 "high_priority_weight": 0, 00:14:53.283 "nvme_adminq_poll_period_us": 10000, 00:14:53.283 "nvme_ioq_poll_period_us": 0, 00:14:53.283 "io_queue_requests": 512, 00:14:53.283 "delay_cmd_submit": true, 00:14:53.283 "transport_retry_count": 4, 00:14:53.283 "bdev_retry_count": 3, 00:14:53.283 "transport_ack_timeout": 0, 00:14:53.283 "ctrlr_loss_timeout_sec": 0, 00:14:53.283 "reconnect_delay_sec": 0, 00:14:53.283 "fast_io_fail_timeout_sec": 0, 00:14:53.283 "disable_auto_failback": false, 00:14:53.283 "generate_uuids": false, 00:14:53.283 "transport_tos": 0, 00:14:53.283 "nvme_error_stat": false, 00:14:53.283 "rdma_srq_size": 0, 00:14:53.283 "io_path_stat": false, 00:14:53.283 "allow_accel_sequence": false, 00:14:53.283 "rdma_max_cq_size": 0, 00:14:53.283 "rdma_cm_event_timeout_ms": 0, 00:14:53.283 "dhchap_digests": [ 00:14:53.283 "sha256", 00:14:53.283 "sha384", 00:14:53.283 "sha512" 00:14:53.283 ], 00:14:53.283 "dhchap_dhgroups": [ 00:14:53.283 "null", 00:14:53.283 "ffdhe2048", 00:14:53.283 "ffdhe3072", 00:14:53.283 "ffdhe4096", 00:14:53.283 "ffdhe6144", 00:14:53.283 "ffdhe8192" 00:14:53.283 ] 00:14:53.283 } 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "method": "bdev_nvme_attach_controller", 00:14:53.283 "params": { 00:14:53.283 "name": "TLSTEST", 00:14:53.283 "trtype": "TCP", 00:14:53.283 "adrfam": "IPv4", 00:14:53.283 "traddr": "10.0.0.2", 00:14:53.283 "trsvcid": "4420", 00:14:53.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.283 "prchk_reftag": false, 00:14:53.283 "prchk_guard": false, 00:14:53.283 "ctrlr_loss_timeout_sec": 0, 00:14:53.283 "reconnect_delay_sec": 0, 00:14:53.283 "fast_io_fail_timeout_sec": 0, 00:14:53.283 "psk": "/tmp/tmp.2SnfbF5pTF", 00:14:53.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.283 "hdgst": false, 00:14:53.283 "ddgst": false 00:14:53.283 } 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "method": "bdev_nvme_set_hotplug", 00:14:53.283 "params": { 00:14:53.283 "period_us": 100000, 00:14:53.283 "enable": false 00:14:53.283 } 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "method": "bdev_wait_for_examine" 00:14:53.283 } 00:14:53.283 ] 00:14:53.283 }, 00:14:53.283 { 00:14:53.283 "subsystem": "nbd", 00:14:53.283 "config": [] 00:14:53.283 } 00:14:53.283 ] 00:14:53.283 }' 00:14:53.283 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.283 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.283 21:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.283 [2024-07-24 21:54:58.795387] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:53.283 [2024-07-24 21:54:58.795603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85760 ] 00:14:53.283 [2024-07-24 21:54:58.934639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.542 [2024-07-24 21:54:59.045239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.542 [2024-07-24 21:54:59.184101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:53.542 [2024-07-24 21:54:59.221979] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.542 [2024-07-24 21:54:59.222299] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:54.108 21:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.108 21:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:54.108 21:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:54.366 Running I/O for 10 seconds... 00:15:04.334 00:15:04.334 Latency(us) 00:15:04.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.334 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:04.334 Verification LBA range: start 0x0 length 0x2000 00:15:04.334 TLSTESTn1 : 10.02 4138.66 16.17 0.00 0.00 30868.39 6374.87 34793.66 00:15:04.334 =================================================================================================================== 00:15:04.334 Total : 4138.66 16.17 0.00 0.00 30868.39 6374.87 34793.66 00:15:04.334 0 00:15:04.334 21:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.334 21:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 85760 00:15:04.334 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85760 ']' 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85760 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85760 00:15:04.335 killing process with pid 85760 00:15:04.335 Received shutdown signal, test time was about 10.000000 seconds 00:15:04.335 00:15:04.335 Latency(us) 00:15:04.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.335 =================================================================================================================== 00:15:04.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85760' 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85760 00:15:04.335 [2024-07-24 21:55:09.945150] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:04.335 21:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85760 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 85728 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85728 ']' 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85728 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85728 00:15:04.594 killing process with pid 85728 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85728' 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85728 00:15:04.594 [2024-07-24 21:55:10.177885] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:04.594 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85728 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85902 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85902 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85902 ']' 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:04.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:04.852 21:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.852 [2024-07-24 21:55:10.442152] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:04.852 [2024-07-24 21:55:10.442243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.111 [2024-07-24 21:55:10.581087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.111 [2024-07-24 21:55:10.675792] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:05.111 [2024-07-24 21:55:10.675845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.111 [2024-07-24 21:55:10.675860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.111 [2024-07-24 21:55:10.675870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.111 [2024-07-24 21:55:10.675880] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.111 [2024-07-24 21:55:10.675913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.111 [2024-07-24 21:55:10.734983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:05.677 21:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:05.677 21:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:05.677 21:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.677 21:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:05.677 21:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.936 21:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.936 21:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.2SnfbF5pTF 00:15:05.936 21:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2SnfbF5pTF 00:15:05.936 21:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:06.195 [2024-07-24 21:55:11.711312] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.195 21:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:06.454 21:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:06.454 [2024-07-24 21:55:12.167439] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:06.454 [2024-07-24 21:55:12.167713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.712 21:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:06.971 malloc0 00:15:06.971 21:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:06.971 21:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF 00:15:07.231 [2024-07-24 21:55:12.879239] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85951 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
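Unlike the first pass, this target (pid 85902) starts empty and is configured at runtime; the rpc.py calls issued by the setup_nvmf_tgt helper are all visible in the trace above. Collected into a plain shell sequence (commands copied from the trace, the PSK file being the one generated earlier in this run):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as requiring TLS, which is what triggers the "experimental" notice above
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2SnfbF5pTF

bdevperf is then launched with -z, so it sits idle on /var/tmp/bdevperf.sock until the harness configures it and triggers the run with bdevperf.py perform_tests.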
00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85951 /var/tmp/bdevperf.sock 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85951 ']' 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:07.231 21:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.231 [2024-07-24 21:55:12.946061] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:07.231 [2024-07-24 21:55:12.946327] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85951 ] 00:15:07.504 [2024-07-24 21:55:13.078316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.504 [2024-07-24 21:55:13.174747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.771 [2024-07-24 21:55:13.229421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:08.338 21:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:08.338 21:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:08.338 21:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2SnfbF5pTF 00:15:08.598 21:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:08.598 [2024-07-24 21:55:14.307074] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.856 nvme0n1 00:15:08.856 21:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:08.856 Running I/O for 1 seconds... 
00:15:10.233 00:15:10.233 Latency(us) 00:15:10.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:10.233 Verification LBA range: start 0x0 length 0x2000 00:15:10.233 nvme0n1 : 1.02 4047.44 15.81 0.00 0.00 31172.16 4676.89 20614.05 00:15:10.233 =================================================================================================================== 00:15:10.233 Total : 4047.44 15.81 0.00 0.00 31172.16 4676.89 20614.05 00:15:10.233 0 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85951 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85951 ']' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85951 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85951 00:15:10.233 killing process with pid 85951 00:15:10.233 Received shutdown signal, test time was about 1.000000 seconds 00:15:10.233 00:15:10.233 Latency(us) 00:15:10.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.233 =================================================================================================================== 00:15:10.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85951' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85951 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85951 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85902 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85902 ']' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85902 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85902 00:15:10.233 killing process with pid 85902 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85902' 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85902 00:15:10.233 [2024-07-24 21:55:15.797368] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:10.233 21:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85902 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86002 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86002 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86002 ']' 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:10.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:10.492 21:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.492 [2024-07-24 21:55:16.070052] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:10.492 [2024-07-24 21:55:16.070143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.492 [2024-07-24 21:55:16.207480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.750 [2024-07-24 21:55:16.288878] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.750 [2024-07-24 21:55:16.288930] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.750 [2024-07-24 21:55:16.288958] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.750 [2024-07-24 21:55:16.288967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.750 [2024-07-24 21:55:16.288990] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
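As before, the target binary is not executed directly but through ip netns exec nvmf_tgt_ns_spdk, so the 10.0.0.2 listener address lives inside a dedicated network namespace that the wider suite sets up before this section. A rough sketch of that kind of namespace plumbing, with the veth names and the /24 prefix chosen purely for illustration (only the namespace name and the final exec line come from this log):

  # illustrative namespace setup; the real interfaces are created elsewhere in the suite
  ip netns add nvmf_tgt_ns_spdk
  ip link add veth_host type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev veth_host && ip link set veth_host up
  ip -n nvmf_tgt_ns_spdk addr add 10.0.0.2/24 dev veth_tgt
  ip -n nvmf_tgt_ns_spdk link set veth_tgt up
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF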
00:15:10.750 [2024-07-24 21:55:16.289038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.750 [2024-07-24 21:55:16.344905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.691 [2024-07-24 21:55:17.104794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.691 malloc0 00:15:11.691 [2024-07-24 21:55:17.136378] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:11.691 [2024-07-24 21:55:17.136615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=86034 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 86034 /var/tmp/bdevperf.sock 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86034 ']' 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.691 21:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.691 [2024-07-24 21:55:17.211133] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:15:11.691 [2024-07-24 21:55:17.211369] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86034 ] 00:15:11.691 [2024-07-24 21:55:17.348882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.950 [2024-07-24 21:55:17.434708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.950 [2024-07-24 21:55:17.494535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:12.521 21:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.521 21:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:12.521 21:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2SnfbF5pTF 00:15:12.780 21:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:13.038 [2024-07-24 21:55:18.689971] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.297 nvme0n1 00:15:13.297 21:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:13.297 Running I/O for 1 seconds... 00:15:14.233 00:15:14.233 Latency(us) 00:15:14.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:14.233 Verification LBA range: start 0x0 length 0x2000 00:15:14.233 nvme0n1 : 1.03 4116.51 16.08 0.00 0.00 30757.73 7149.38 19779.96 00:15:14.233 =================================================================================================================== 00:15:14.233 Total : 4116.51 16.08 0.00 0.00 30757.73 7149.38 19779.96 00:15:14.233 0 00:15:14.233 21:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:14.233 21:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.233 21:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.491 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.491 21:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:15:14.491 "subsystems": [ 00:15:14.491 { 00:15:14.491 "subsystem": "keyring", 00:15:14.491 "config": [ 00:15:14.491 { 00:15:14.491 "method": "keyring_file_add_key", 00:15:14.491 "params": { 00:15:14.491 "name": "key0", 00:15:14.491 "path": "/tmp/tmp.2SnfbF5pTF" 00:15:14.491 } 00:15:14.491 } 00:15:14.491 ] 00:15:14.491 }, 00:15:14.491 { 00:15:14.492 "subsystem": "iobuf", 00:15:14.492 "config": [ 00:15:14.492 { 00:15:14.492 "method": "iobuf_set_options", 00:15:14.492 "params": { 00:15:14.492 "small_pool_count": 8192, 00:15:14.492 "large_pool_count": 1024, 00:15:14.492 "small_bufsize": 8192, 00:15:14.492 "large_bufsize": 135168 00:15:14.492 } 00:15:14.492 } 00:15:14.492 ] 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "subsystem": "sock", 00:15:14.492 "config": [ 00:15:14.492 { 00:15:14.492 "method": "sock_set_default_impl", 00:15:14.492 "params": { 00:15:14.492 "impl_name": "uring" 00:15:14.492 } 00:15:14.492 
}, 00:15:14.492 { 00:15:14.492 "method": "sock_impl_set_options", 00:15:14.492 "params": { 00:15:14.492 "impl_name": "ssl", 00:15:14.492 "recv_buf_size": 4096, 00:15:14.492 "send_buf_size": 4096, 00:15:14.492 "enable_recv_pipe": true, 00:15:14.492 "enable_quickack": false, 00:15:14.492 "enable_placement_id": 0, 00:15:14.492 "enable_zerocopy_send_server": true, 00:15:14.492 "enable_zerocopy_send_client": false, 00:15:14.492 "zerocopy_threshold": 0, 00:15:14.492 "tls_version": 0, 00:15:14.492 "enable_ktls": false 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "sock_impl_set_options", 00:15:14.492 "params": { 00:15:14.492 "impl_name": "posix", 00:15:14.492 "recv_buf_size": 2097152, 00:15:14.492 "send_buf_size": 2097152, 00:15:14.492 "enable_recv_pipe": true, 00:15:14.492 "enable_quickack": false, 00:15:14.492 "enable_placement_id": 0, 00:15:14.492 "enable_zerocopy_send_server": true, 00:15:14.492 "enable_zerocopy_send_client": false, 00:15:14.492 "zerocopy_threshold": 0, 00:15:14.492 "tls_version": 0, 00:15:14.492 "enable_ktls": false 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "sock_impl_set_options", 00:15:14.492 "params": { 00:15:14.492 "impl_name": "uring", 00:15:14.492 "recv_buf_size": 2097152, 00:15:14.492 "send_buf_size": 2097152, 00:15:14.492 "enable_recv_pipe": true, 00:15:14.492 "enable_quickack": false, 00:15:14.492 "enable_placement_id": 0, 00:15:14.492 "enable_zerocopy_send_server": false, 00:15:14.492 "enable_zerocopy_send_client": false, 00:15:14.492 "zerocopy_threshold": 0, 00:15:14.492 "tls_version": 0, 00:15:14.492 "enable_ktls": false 00:15:14.492 } 00:15:14.492 } 00:15:14.492 ] 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "subsystem": "vmd", 00:15:14.492 "config": [] 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "subsystem": "accel", 00:15:14.492 "config": [ 00:15:14.492 { 00:15:14.492 "method": "accel_set_options", 00:15:14.492 "params": { 00:15:14.492 "small_cache_size": 128, 00:15:14.492 "large_cache_size": 16, 00:15:14.492 "task_count": 2048, 00:15:14.492 "sequence_count": 2048, 00:15:14.492 "buf_count": 2048 00:15:14.492 } 00:15:14.492 } 00:15:14.492 ] 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "subsystem": "bdev", 00:15:14.492 "config": [ 00:15:14.492 { 00:15:14.492 "method": "bdev_set_options", 00:15:14.492 "params": { 00:15:14.492 "bdev_io_pool_size": 65535, 00:15:14.492 "bdev_io_cache_size": 256, 00:15:14.492 "bdev_auto_examine": true, 00:15:14.492 "iobuf_small_cache_size": 128, 00:15:14.492 "iobuf_large_cache_size": 16 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "bdev_raid_set_options", 00:15:14.492 "params": { 00:15:14.492 "process_window_size_kb": 1024 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "bdev_iscsi_set_options", 00:15:14.492 "params": { 00:15:14.492 "timeout_sec": 30 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "bdev_nvme_set_options", 00:15:14.492 "params": { 00:15:14.492 "action_on_timeout": "none", 00:15:14.492 "timeout_us": 0, 00:15:14.492 "timeout_admin_us": 0, 00:15:14.492 "keep_alive_timeout_ms": 10000, 00:15:14.492 "arbitration_burst": 0, 00:15:14.492 "low_priority_weight": 0, 00:15:14.492 "medium_priority_weight": 0, 00:15:14.492 "high_priority_weight": 0, 00:15:14.492 "nvme_adminq_poll_period_us": 10000, 00:15:14.492 "nvme_ioq_poll_period_us": 0, 00:15:14.492 "io_queue_requests": 0, 00:15:14.492 "delay_cmd_submit": true, 00:15:14.492 "transport_retry_count": 4, 00:15:14.492 "bdev_retry_count": 3, 00:15:14.492 
"transport_ack_timeout": 0, 00:15:14.492 "ctrlr_loss_timeout_sec": 0, 00:15:14.492 "reconnect_delay_sec": 0, 00:15:14.492 "fast_io_fail_timeout_sec": 0, 00:15:14.492 "disable_auto_failback": false, 00:15:14.492 "generate_uuids": false, 00:15:14.492 "transport_tos": 0, 00:15:14.492 "nvme_error_stat": false, 00:15:14.492 "rdma_srq_size": 0, 00:15:14.492 "io_path_stat": false, 00:15:14.492 "allow_accel_sequence": false, 00:15:14.492 "rdma_max_cq_size": 0, 00:15:14.492 "rdma_cm_event_timeout_ms": 0, 00:15:14.492 "dhchap_digests": [ 00:15:14.492 "sha256", 00:15:14.492 "sha384", 00:15:14.492 "sha512" 00:15:14.492 ], 00:15:14.492 "dhchap_dhgroups": [ 00:15:14.492 "null", 00:15:14.492 "ffdhe2048", 00:15:14.492 "ffdhe3072", 00:15:14.492 "ffdhe4096", 00:15:14.492 "ffdhe6144", 00:15:14.492 "ffdhe8192" 00:15:14.492 ] 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "bdev_nvme_set_hotplug", 00:15:14.492 "params": { 00:15:14.492 "period_us": 100000, 00:15:14.492 "enable": false 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "bdev_malloc_create", 00:15:14.492 "params": { 00:15:14.492 "name": "malloc0", 00:15:14.492 "num_blocks": 8192, 00:15:14.492 "block_size": 4096, 00:15:14.492 "physical_block_size": 4096, 00:15:14.492 "uuid": "167d2828-dddf-4cff-bfcc-817f88235291", 00:15:14.492 "optimal_io_boundary": 0 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "bdev_wait_for_examine" 00:15:14.492 } 00:15:14.492 ] 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "subsystem": "nbd", 00:15:14.492 "config": [] 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "subsystem": "scheduler", 00:15:14.492 "config": [ 00:15:14.492 { 00:15:14.492 "method": "framework_set_scheduler", 00:15:14.492 "params": { 00:15:14.492 "name": "static" 00:15:14.492 } 00:15:14.492 } 00:15:14.492 ] 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "subsystem": "nvmf", 00:15:14.492 "config": [ 00:15:14.492 { 00:15:14.492 "method": "nvmf_set_config", 00:15:14.492 "params": { 00:15:14.492 "discovery_filter": "match_any", 00:15:14.492 "admin_cmd_passthru": { 00:15:14.492 "identify_ctrlr": false 00:15:14.492 } 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "nvmf_set_max_subsystems", 00:15:14.492 "params": { 00:15:14.492 "max_subsystems": 1024 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "nvmf_set_crdt", 00:15:14.492 "params": { 00:15:14.492 "crdt1": 0, 00:15:14.492 "crdt2": 0, 00:15:14.492 "crdt3": 0 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "nvmf_create_transport", 00:15:14.492 "params": { 00:15:14.492 "trtype": "TCP", 00:15:14.492 "max_queue_depth": 128, 00:15:14.492 "max_io_qpairs_per_ctrlr": 127, 00:15:14.492 "in_capsule_data_size": 4096, 00:15:14.492 "max_io_size": 131072, 00:15:14.492 "io_unit_size": 131072, 00:15:14.492 "max_aq_depth": 128, 00:15:14.492 "num_shared_buffers": 511, 00:15:14.492 "buf_cache_size": 4294967295, 00:15:14.492 "dif_insert_or_strip": false, 00:15:14.492 "zcopy": false, 00:15:14.492 "c2h_success": false, 00:15:14.492 "sock_priority": 0, 00:15:14.492 "abort_timeout_sec": 1, 00:15:14.492 "ack_timeout": 0, 00:15:14.492 "data_wr_pool_size": 0 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "nvmf_create_subsystem", 00:15:14.492 "params": { 00:15:14.492 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.492 "allow_any_host": false, 00:15:14.492 "serial_number": "00000000000000000000", 00:15:14.492 "model_number": "SPDK bdev Controller", 00:15:14.492 "max_namespaces": 32, 00:15:14.492 
"min_cntlid": 1, 00:15:14.492 "max_cntlid": 65519, 00:15:14.492 "ana_reporting": false 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "nvmf_subsystem_add_host", 00:15:14.492 "params": { 00:15:14.492 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.492 "host": "nqn.2016-06.io.spdk:host1", 00:15:14.492 "psk": "key0" 00:15:14.492 } 00:15:14.492 }, 00:15:14.492 { 00:15:14.492 "method": "nvmf_subsystem_add_ns", 00:15:14.492 "params": { 00:15:14.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.493 "namespace": { 00:15:14.493 "nsid": 1, 00:15:14.493 "bdev_name": "malloc0", 00:15:14.493 "nguid": "167D2828DDDF4CFFBFCC817F88235291", 00:15:14.493 "uuid": "167d2828-dddf-4cff-bfcc-817f88235291", 00:15:14.493 "no_auto_visible": false 00:15:14.493 } 00:15:14.493 } 00:15:14.493 }, 00:15:14.493 { 00:15:14.493 "method": "nvmf_subsystem_add_listener", 00:15:14.493 "params": { 00:15:14.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.493 "listen_address": { 00:15:14.493 "trtype": "TCP", 00:15:14.493 "adrfam": "IPv4", 00:15:14.493 "traddr": "10.0.0.2", 00:15:14.493 "trsvcid": "4420" 00:15:14.493 }, 00:15:14.493 "secure_channel": true 00:15:14.493 } 00:15:14.493 } 00:15:14.493 ] 00:15:14.493 } 00:15:14.493 ] 00:15:14.493 }' 00:15:14.493 21:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:14.753 21:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:15:14.753 "subsystems": [ 00:15:14.753 { 00:15:14.753 "subsystem": "keyring", 00:15:14.753 "config": [ 00:15:14.753 { 00:15:14.753 "method": "keyring_file_add_key", 00:15:14.753 "params": { 00:15:14.753 "name": "key0", 00:15:14.753 "path": "/tmp/tmp.2SnfbF5pTF" 00:15:14.753 } 00:15:14.753 } 00:15:14.753 ] 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "subsystem": "iobuf", 00:15:14.753 "config": [ 00:15:14.753 { 00:15:14.753 "method": "iobuf_set_options", 00:15:14.753 "params": { 00:15:14.753 "small_pool_count": 8192, 00:15:14.753 "large_pool_count": 1024, 00:15:14.753 "small_bufsize": 8192, 00:15:14.753 "large_bufsize": 135168 00:15:14.753 } 00:15:14.753 } 00:15:14.753 ] 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "subsystem": "sock", 00:15:14.753 "config": [ 00:15:14.753 { 00:15:14.753 "method": "sock_set_default_impl", 00:15:14.753 "params": { 00:15:14.753 "impl_name": "uring" 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "sock_impl_set_options", 00:15:14.753 "params": { 00:15:14.753 "impl_name": "ssl", 00:15:14.753 "recv_buf_size": 4096, 00:15:14.753 "send_buf_size": 4096, 00:15:14.753 "enable_recv_pipe": true, 00:15:14.753 "enable_quickack": false, 00:15:14.753 "enable_placement_id": 0, 00:15:14.753 "enable_zerocopy_send_server": true, 00:15:14.753 "enable_zerocopy_send_client": false, 00:15:14.753 "zerocopy_threshold": 0, 00:15:14.753 "tls_version": 0, 00:15:14.753 "enable_ktls": false 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "sock_impl_set_options", 00:15:14.753 "params": { 00:15:14.753 "impl_name": "posix", 00:15:14.753 "recv_buf_size": 2097152, 00:15:14.753 "send_buf_size": 2097152, 00:15:14.753 "enable_recv_pipe": true, 00:15:14.753 "enable_quickack": false, 00:15:14.753 "enable_placement_id": 0, 00:15:14.753 "enable_zerocopy_send_server": true, 00:15:14.753 "enable_zerocopy_send_client": false, 00:15:14.753 "zerocopy_threshold": 0, 00:15:14.753 "tls_version": 0, 00:15:14.753 "enable_ktls": false 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "sock_impl_set_options", 
00:15:14.753 "params": { 00:15:14.753 "impl_name": "uring", 00:15:14.753 "recv_buf_size": 2097152, 00:15:14.753 "send_buf_size": 2097152, 00:15:14.753 "enable_recv_pipe": true, 00:15:14.753 "enable_quickack": false, 00:15:14.753 "enable_placement_id": 0, 00:15:14.753 "enable_zerocopy_send_server": false, 00:15:14.753 "enable_zerocopy_send_client": false, 00:15:14.753 "zerocopy_threshold": 0, 00:15:14.753 "tls_version": 0, 00:15:14.753 "enable_ktls": false 00:15:14.753 } 00:15:14.753 } 00:15:14.753 ] 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "subsystem": "vmd", 00:15:14.753 "config": [] 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "subsystem": "accel", 00:15:14.753 "config": [ 00:15:14.753 { 00:15:14.753 "method": "accel_set_options", 00:15:14.753 "params": { 00:15:14.753 "small_cache_size": 128, 00:15:14.753 "large_cache_size": 16, 00:15:14.753 "task_count": 2048, 00:15:14.753 "sequence_count": 2048, 00:15:14.753 "buf_count": 2048 00:15:14.753 } 00:15:14.753 } 00:15:14.753 ] 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "subsystem": "bdev", 00:15:14.753 "config": [ 00:15:14.753 { 00:15:14.753 "method": "bdev_set_options", 00:15:14.753 "params": { 00:15:14.753 "bdev_io_pool_size": 65535, 00:15:14.753 "bdev_io_cache_size": 256, 00:15:14.753 "bdev_auto_examine": true, 00:15:14.753 "iobuf_small_cache_size": 128, 00:15:14.753 "iobuf_large_cache_size": 16 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "bdev_raid_set_options", 00:15:14.753 "params": { 00:15:14.753 "process_window_size_kb": 1024 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "bdev_iscsi_set_options", 00:15:14.753 "params": { 00:15:14.753 "timeout_sec": 30 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "bdev_nvme_set_options", 00:15:14.753 "params": { 00:15:14.753 "action_on_timeout": "none", 00:15:14.753 "timeout_us": 0, 00:15:14.753 "timeout_admin_us": 0, 00:15:14.753 "keep_alive_timeout_ms": 10000, 00:15:14.753 "arbitration_burst": 0, 00:15:14.753 "low_priority_weight": 0, 00:15:14.753 "medium_priority_weight": 0, 00:15:14.753 "high_priority_weight": 0, 00:15:14.753 "nvme_adminq_poll_period_us": 10000, 00:15:14.753 "nvme_ioq_poll_period_us": 0, 00:15:14.753 "io_queue_requests": 512, 00:15:14.753 "delay_cmd_submit": true, 00:15:14.753 "transport_retry_count": 4, 00:15:14.753 "bdev_retry_count": 3, 00:15:14.753 "transport_ack_timeout": 0, 00:15:14.753 "ctrlr_loss_timeout_sec": 0, 00:15:14.753 "reconnect_delay_sec": 0, 00:15:14.753 "fast_io_fail_timeout_sec": 0, 00:15:14.753 "disable_auto_failback": false, 00:15:14.753 "generate_uuids": false, 00:15:14.753 "transport_tos": 0, 00:15:14.753 "nvme_error_stat": false, 00:15:14.753 "rdma_srq_size": 0, 00:15:14.753 "io_path_stat": false, 00:15:14.753 "allow_accel_sequence": false, 00:15:14.753 "rdma_max_cq_size": 0, 00:15:14.753 "rdma_cm_event_timeout_ms": 0, 00:15:14.753 "dhchap_digests": [ 00:15:14.753 "sha256", 00:15:14.753 "sha384", 00:15:14.753 "sha512" 00:15:14.753 ], 00:15:14.753 "dhchap_dhgroups": [ 00:15:14.753 "null", 00:15:14.753 "ffdhe2048", 00:15:14.753 "ffdhe3072", 00:15:14.753 "ffdhe4096", 00:15:14.753 "ffdhe6144", 00:15:14.753 "ffdhe8192" 00:15:14.753 ] 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "bdev_nvme_attach_controller", 00:15:14.753 "params": { 00:15:14.753 "name": "nvme0", 00:15:14.753 "trtype": "TCP", 00:15:14.753 "adrfam": "IPv4", 00:15:14.753 "traddr": "10.0.0.2", 00:15:14.753 "trsvcid": "4420", 00:15:14.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.753 
"prchk_reftag": false, 00:15:14.753 "prchk_guard": false, 00:15:14.753 "ctrlr_loss_timeout_sec": 0, 00:15:14.753 "reconnect_delay_sec": 0, 00:15:14.753 "fast_io_fail_timeout_sec": 0, 00:15:14.753 "psk": "key0", 00:15:14.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.753 "hdgst": false, 00:15:14.753 "ddgst": false 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "bdev_nvme_set_hotplug", 00:15:14.753 "params": { 00:15:14.753 "period_us": 100000, 00:15:14.753 "enable": false 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.753 "method": "bdev_enable_histogram", 00:15:14.753 "params": { 00:15:14.753 "name": "nvme0n1", 00:15:14.753 "enable": true 00:15:14.753 } 00:15:14.753 }, 00:15:14.753 { 00:15:14.754 "method": "bdev_wait_for_examine" 00:15:14.754 } 00:15:14.754 ] 00:15:14.754 }, 00:15:14.754 { 00:15:14.754 "subsystem": "nbd", 00:15:14.754 "config": [] 00:15:14.754 } 00:15:14.754 ] 00:15:14.754 }' 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 86034 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86034 ']' 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86034 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86034 00:15:14.754 killing process with pid 86034 00:15:14.754 Received shutdown signal, test time was about 1.000000 seconds 00:15:14.754 00:15:14.754 Latency(us) 00:15:14.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.754 =================================================================================================================== 00:15:14.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86034' 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86034 00:15:14.754 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86034 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 86002 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86002 ']' 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86002 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86002 00:15:15.013 killing process with pid 86002 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86002' 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86002 00:15:15.013 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86002 00:15:15.272 
21:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:15.272 21:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.272 21:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:15:15.272 "subsystems": [ 00:15:15.272 { 00:15:15.272 "subsystem": "keyring", 00:15:15.272 "config": [ 00:15:15.272 { 00:15:15.272 "method": "keyring_file_add_key", 00:15:15.272 "params": { 00:15:15.272 "name": "key0", 00:15:15.272 "path": "/tmp/tmp.2SnfbF5pTF" 00:15:15.272 } 00:15:15.272 } 00:15:15.272 ] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "iobuf", 00:15:15.272 "config": [ 00:15:15.272 { 00:15:15.272 "method": "iobuf_set_options", 00:15:15.272 "params": { 00:15:15.272 "small_pool_count": 8192, 00:15:15.272 "large_pool_count": 1024, 00:15:15.272 "small_bufsize": 8192, 00:15:15.272 "large_bufsize": 135168 00:15:15.272 } 00:15:15.272 } 00:15:15.272 ] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "sock", 00:15:15.272 "config": [ 00:15:15.272 { 00:15:15.272 "method": "sock_set_default_impl", 00:15:15.272 "params": { 00:15:15.272 "impl_name": "uring" 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "sock_impl_set_options", 00:15:15.272 "params": { 00:15:15.272 "impl_name": "ssl", 00:15:15.272 "recv_buf_size": 4096, 00:15:15.272 "send_buf_size": 4096, 00:15:15.272 "enable_recv_pipe": true, 00:15:15.272 "enable_quickack": false, 00:15:15.272 "enable_placement_id": 0, 00:15:15.272 "enable_zerocopy_send_server": true, 00:15:15.272 "enable_zerocopy_send_client": false, 00:15:15.272 "zerocopy_threshold": 0, 00:15:15.272 "tls_version": 0, 00:15:15.272 "enable_ktls": false 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "sock_impl_set_options", 00:15:15.272 "params": { 00:15:15.272 "impl_name": "posix", 00:15:15.272 "recv_buf_size": 2097152, 00:15:15.272 "send_buf_size": 2097152, 00:15:15.272 "enable_recv_pipe": true, 00:15:15.272 "enable_quickack": false, 00:15:15.272 "enable_placement_id": 0, 00:15:15.272 "enable_zerocopy_send_server": true, 00:15:15.272 "enable_zerocopy_send_client": false, 00:15:15.272 "zerocopy_threshold": 0, 00:15:15.272 "tls_version": 0, 00:15:15.272 "enable_ktls": false 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "sock_impl_set_options", 00:15:15.272 "params": { 00:15:15.272 "impl_name": "uring", 00:15:15.272 "recv_buf_size": 2097152, 00:15:15.272 "send_buf_size": 2097152, 00:15:15.272 "enable_recv_pipe": true, 00:15:15.272 "enable_quickack": false, 00:15:15.272 "enable_placement_id": 0, 00:15:15.272 "enable_zerocopy_send_server": false, 00:15:15.272 "enable_zerocopy_send_client": false, 00:15:15.272 "zerocopy_threshold": 0, 00:15:15.272 "tls_version": 0, 00:15:15.272 "enable_ktls": false 00:15:15.272 } 00:15:15.272 } 00:15:15.272 ] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "vmd", 00:15:15.272 "config": [] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "accel", 00:15:15.272 "config": [ 00:15:15.272 { 00:15:15.272 "method": "accel_set_options", 00:15:15.272 "params": { 00:15:15.272 "small_cache_size": 128, 00:15:15.272 "large_cache_size": 16, 00:15:15.272 "task_count": 2048, 00:15:15.272 "sequence_count": 2048, 00:15:15.272 "buf_count": 2048 00:15:15.272 } 00:15:15.272 } 00:15:15.272 ] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "bdev", 00:15:15.272 "config": [ 00:15:15.272 { 00:15:15.272 "method": "bdev_set_options", 00:15:15.272 "params": { 00:15:15.272 "bdev_io_pool_size": 65535, 00:15:15.272 
"bdev_io_cache_size": 256, 00:15:15.272 "bdev_auto_examine": true, 00:15:15.272 "iobuf_small_cache_size": 128, 00:15:15.272 "iobuf_large_cache_size": 16 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "bdev_raid_set_options", 00:15:15.272 "params": { 00:15:15.272 "process_window_size_kb": 1024 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "bdev_iscsi_set_options", 00:15:15.272 "params": { 00:15:15.272 "timeout_sec": 30 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "bdev_nvme_set_options", 00:15:15.272 "params": { 00:15:15.272 "action_on_timeout": "none", 00:15:15.272 "timeout_us": 0, 00:15:15.272 "timeout_admin_us": 0, 00:15:15.272 "keep_alive_timeout_ms": 10000, 00:15:15.272 "arbitration_burst": 0, 00:15:15.272 "low_priority_weight": 0, 00:15:15.272 "medium_priority_weight": 0, 00:15:15.272 "high_priority_weight": 0, 00:15:15.272 "nvme_adminq_poll_period_us": 10000, 00:15:15.272 "nvme_ioq_poll_period_us": 0, 00:15:15.272 "io_queue_requests": 0, 00:15:15.272 "delay_cmd_submit": true, 00:15:15.272 "transport_retry_count": 4, 00:15:15.272 "bdev_retry_count": 3, 00:15:15.272 "transport_ack_timeout": 0, 00:15:15.272 "ctrlr_loss_timeout_sec": 0, 00:15:15.272 "reconnect_delay_sec": 0, 00:15:15.272 "fast_io_fail_timeout_sec": 0, 00:15:15.272 "disable_auto_failback": false, 00:15:15.272 "generate_uuids": false, 00:15:15.272 "transport_tos": 0, 00:15:15.272 "nvme_error_stat": false, 00:15:15.272 "rdma_srq_size": 0, 00:15:15.272 "io_path_stat": false, 00:15:15.272 "allow_accel_sequence": false, 00:15:15.272 "rdma_max_cq_size": 0, 00:15:15.272 "rdma_cm_event_timeout_ms": 0, 00:15:15.272 "dhchap_digests": [ 00:15:15.272 "sha256", 00:15:15.272 "sha384", 00:15:15.272 "sha512" 00:15:15.272 ], 00:15:15.272 "dhchap_dhgroups": [ 00:15:15.272 "null", 00:15:15.272 "ffdhe2048", 00:15:15.272 "ffdhe3072", 00:15:15.272 "ffdhe4096", 00:15:15.272 "ffdhe6144", 00:15:15.272 "ffdhe8192" 00:15:15.272 ] 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "bdev_nvme_set_hotplug", 00:15:15.272 "params": { 00:15:15.272 "period_us": 100000, 00:15:15.272 "enable": false 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "bdev_malloc_create", 00:15:15.272 "params": { 00:15:15.272 "name": "malloc0", 00:15:15.272 "num_blocks": 8192, 00:15:15.272 "block_size": 4096, 00:15:15.272 "physical_block_size": 4096, 00:15:15.272 "uuid": "167d2828-dddf-4cff-bfcc-817f88235291", 00:15:15.272 "optimal_io_boundary": 0 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "bdev_wait_for_examine" 00:15:15.272 } 00:15:15.272 ] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "nbd", 00:15:15.272 "config": [] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "scheduler", 00:15:15.272 "config": [ 00:15:15.272 { 00:15:15.272 "method": "framework_set_scheduler", 00:15:15.272 "params": { 00:15:15.272 "name": "static" 00:15:15.272 } 00:15:15.272 } 00:15:15.272 ] 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "subsystem": "nvmf", 00:15:15.272 "config": [ 00:15:15.272 { 00:15:15.272 "method": "nvmf_set_config", 00:15:15.272 "params": { 00:15:15.272 "discovery_filter": "match_any", 00:15:15.272 "admin_cmd_passthru": { 00:15:15.272 "identify_ctrlr": false 00:15:15.272 } 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "nvmf_set_max_subsystems", 00:15:15.272 "params": { 00:15:15.272 "max_subsystems": 1024 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "nvmf_set_crdt", 00:15:15.272 
"params": { 00:15:15.272 "crdt1": 0, 00:15:15.272 "crdt2": 0, 00:15:15.272 "crdt3": 0 00:15:15.272 } 00:15:15.272 }, 00:15:15.272 { 00:15:15.272 "method": "nvmf_create_transport", 00:15:15.272 "params": { 00:15:15.272 "trtype": "TCP", 00:15:15.272 "max_queue_depth": 128, 00:15:15.272 "max_io_qpairs_per_ctrlr": 127, 00:15:15.272 "in_capsule_data_size": 4096, 00:15:15.272 "max_io_size": 131072, 00:15:15.272 "io_unit_size": 131072, 00:15:15.272 "max_aq_depth": 128, 00:15:15.272 "num_shared_buffers": 511, 00:15:15.272 "buf_cache_size": 4294967295, 00:15:15.272 "dif_insert_or_strip": false, 00:15:15.272 "zcopy": false, 00:15:15.272 "c2h_success": false, 00:15:15.272 "sock_priority": 0, 00:15:15.272 "abort_timeout_sec": 1, 00:15:15.272 "ack_timeout": 0, 00:15:15.273 "data_wr_pool_size": 0 00:15:15.273 } 00:15:15.273 }, 00:15:15.273 { 00:15:15.273 "method": "nvmf_create_subsystem", 00:15:15.273 "params": { 00:15:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.273 "allow_any_host": false, 00:15:15.273 "serial_number": "00000000000000000000", 00:15:15.273 "model_number": "SPDK bdev Controller", 00:15:15.273 "max_namespaces": 32, 00:15:15.273 "min_cntlid": 1, 00:15:15.273 "max_cntlid": 65519, 00:15:15.273 "ana_reporting": false 00:15:15.273 } 00:15:15.273 }, 00:15:15.273 { 00:15:15.273 "method": "nvmf_subsystem_add_host", 00:15:15.273 "params": { 00:15:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.273 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.273 "psk": "key0" 00:15:15.273 } 00:15:15.273 }, 00:15:15.273 { 00:15:15.273 "method": "nvmf_subsystem_add_ns", 00:15:15.273 "params": { 00:15:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.273 "namespace": { 00:15:15.273 "nsid": 1, 00:15:15.273 "bdev_name": "malloc0", 00:15:15.273 "nguid": "167D2828DDDF4CFFBFCC817F88235291", 00:15:15.273 "uuid": "167d2828-dddf-4cff-bfcc-817f88235291", 00:15:15.273 "no_auto_visible": false 00:15:15.273 } 00:15:15.273 } 00:15:15.273 }, 00:15:15.273 { 00:15:15.273 "method": "nvmf_subsystem_add_listener", 00:15:15.273 "params": { 00:15:15.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.273 "listen_address": { 00:15:15.273 "trtype": "TCP", 00:15:15.273 "adrfam": "IPv4", 00:15:15.273 "traddr": "10.0.0.2", 00:15:15.273 "trsvcid": "4420" 00:15:15.273 }, 00:15:15.273 "secure_channel": true 00:15:15.273 } 00:15:15.273 } 00:15:15.273 ] 00:15:15.273 } 00:15:15.273 ] 00:15:15.273 }' 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86095 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86095 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86095 ']' 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.273 21:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.273 [2024-07-24 21:55:20.929013] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:15.273 [2024-07-24 21:55:20.929233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.532 [2024-07-24 21:55:21.060891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.532 [2024-07-24 21:55:21.140993] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.532 [2024-07-24 21:55:21.141197] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.532 [2024-07-24 21:55:21.141317] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.532 [2024-07-24 21:55:21.141440] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.532 [2024-07-24 21:55:21.141474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.532 [2024-07-24 21:55:21.141642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.790 [2024-07-24 21:55:21.310945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:15.790 [2024-07-24 21:55:21.386080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.790 [2024-07-24 21:55:21.418025] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:15.790 [2024-07-24 21:55:21.418230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=86127 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 86127 /var/tmp/bdevperf.sock 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86127 ']' 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
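The inline configuration echoed above is long, but only a few entries actually enable TLS on the target: the keyring entry that loads the PSK file, the host entry that binds that key to the initiator NQN, and the listener marked as a secure channel. A condensed sketch of just those pieces follows; it is an excerpt, not a standalone working config (the malloc0 bdev, namespace and sock/bdev tuning from the full dump above are omitted, as is the ip netns wrapper), and the key name, path and addresses are the values used in this run.

    # TLS-relevant excerpt of the config dumped above.
    config=$(cat <<'JSON'
    {
      "subsystems": [
        {"subsystem": "keyring", "config": [
          {"method": "keyring_file_add_key",
           "params": {"name": "key0", "path": "/tmp/tmp.2SnfbF5pTF"}}
        ]},
        {"subsystem": "nvmf", "config": [
          {"method": "nvmf_create_transport", "params": {"trtype": "TCP"}},
          {"method": "nvmf_create_subsystem",
           "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false}},
          {"method": "nvmf_subsystem_add_host",
           "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0"}},
          {"method": "nvmf_subsystem_add_listener",
           "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                                         "traddr": "10.0.0.2", "trsvcid": "4420"},
                      "secure_channel": true}}
        ]}
      ]
    }
    JSON
    )
    # The target reads the JSON through process substitution, which is where the
    # /dev/fd/62 path in the trace comes from.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config")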
00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.357 21:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:15:16.357 "subsystems": [ 00:15:16.357 { 00:15:16.357 "subsystem": "keyring", 00:15:16.357 "config": [ 00:15:16.357 { 00:15:16.357 "method": "keyring_file_add_key", 00:15:16.357 "params": { 00:15:16.357 "name": "key0", 00:15:16.357 "path": "/tmp/tmp.2SnfbF5pTF" 00:15:16.357 } 00:15:16.357 } 00:15:16.357 ] 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "subsystem": "iobuf", 00:15:16.357 "config": [ 00:15:16.357 { 00:15:16.357 "method": "iobuf_set_options", 00:15:16.357 "params": { 00:15:16.357 "small_pool_count": 8192, 00:15:16.357 "large_pool_count": 1024, 00:15:16.357 "small_bufsize": 8192, 00:15:16.357 "large_bufsize": 135168 00:15:16.357 } 00:15:16.357 } 00:15:16.357 ] 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "subsystem": "sock", 00:15:16.357 "config": [ 00:15:16.357 { 00:15:16.357 "method": "sock_set_default_impl", 00:15:16.357 "params": { 00:15:16.357 "impl_name": "uring" 00:15:16.357 } 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "method": "sock_impl_set_options", 00:15:16.357 "params": { 00:15:16.357 "impl_name": "ssl", 00:15:16.357 "recv_buf_size": 4096, 00:15:16.357 "send_buf_size": 4096, 00:15:16.357 "enable_recv_pipe": true, 00:15:16.357 "enable_quickack": false, 00:15:16.357 "enable_placement_id": 0, 00:15:16.357 "enable_zerocopy_send_server": true, 00:15:16.357 "enable_zerocopy_send_client": false, 00:15:16.357 "zerocopy_threshold": 0, 00:15:16.357 "tls_version": 0, 00:15:16.357 "enable_ktls": false 00:15:16.357 } 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "method": "sock_impl_set_options", 00:15:16.357 "params": { 00:15:16.357 "impl_name": "posix", 00:15:16.357 "recv_buf_size": 2097152, 00:15:16.357 "send_buf_size": 2097152, 00:15:16.357 "enable_recv_pipe": true, 00:15:16.357 "enable_quickack": false, 00:15:16.357 "enable_placement_id": 0, 00:15:16.357 "enable_zerocopy_send_server": true, 00:15:16.357 "enable_zerocopy_send_client": false, 00:15:16.357 "zerocopy_threshold": 0, 00:15:16.357 "tls_version": 0, 00:15:16.357 "enable_ktls": false 00:15:16.357 } 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "method": "sock_impl_set_options", 00:15:16.357 "params": { 00:15:16.357 "impl_name": "uring", 00:15:16.357 "recv_buf_size": 2097152, 00:15:16.357 "send_buf_size": 2097152, 00:15:16.357 "enable_recv_pipe": true, 00:15:16.357 "enable_quickack": false, 00:15:16.357 "enable_placement_id": 0, 00:15:16.357 "enable_zerocopy_send_server": false, 00:15:16.357 "enable_zerocopy_send_client": false, 00:15:16.357 "zerocopy_threshold": 0, 00:15:16.357 "tls_version": 0, 00:15:16.357 "enable_ktls": false 00:15:16.357 } 00:15:16.357 } 00:15:16.357 ] 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "subsystem": "vmd", 00:15:16.357 "config": [] 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "subsystem": "accel", 00:15:16.357 "config": [ 00:15:16.357 { 00:15:16.357 "method": "accel_set_options", 00:15:16.357 "params": { 00:15:16.357 "small_cache_size": 128, 00:15:16.357 "large_cache_size": 16, 00:15:16.357 "task_count": 2048, 00:15:16.357 "sequence_count": 2048, 00:15:16.357 "buf_count": 2048 00:15:16.357 } 00:15:16.357 } 00:15:16.357 ] 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "subsystem": "bdev", 00:15:16.357 "config": [ 00:15:16.357 { 00:15:16.357 "method": "bdev_set_options", 00:15:16.357 "params": { 00:15:16.357 "bdev_io_pool_size": 65535, 00:15:16.357 
"bdev_io_cache_size": 256, 00:15:16.357 "bdev_auto_examine": true, 00:15:16.357 "iobuf_small_cache_size": 128, 00:15:16.357 "iobuf_large_cache_size": 16 00:15:16.357 } 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "method": "bdev_raid_set_options", 00:15:16.357 "params": { 00:15:16.357 "process_window_size_kb": 1024 00:15:16.357 } 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "method": "bdev_iscsi_set_options", 00:15:16.357 "params": { 00:15:16.357 "timeout_sec": 30 00:15:16.357 } 00:15:16.357 }, 00:15:16.357 { 00:15:16.357 "method": "bdev_nvme_set_options", 00:15:16.357 "params": { 00:15:16.357 "action_on_timeout": "none", 00:15:16.357 "timeout_us": 0, 00:15:16.357 "timeout_admin_us": 0, 00:15:16.357 "keep_alive_timeout_ms": 10000, 00:15:16.357 "arbitration_burst": 0, 00:15:16.357 "low_priority_weight": 0, 00:15:16.357 "medium_priority_weight": 0, 00:15:16.357 "high_priority_weight": 0, 00:15:16.357 "nvme_adminq_poll_period_us": 10000, 00:15:16.357 "nvme_ioq_poll_period_us": 0, 00:15:16.357 "io_queue_requests": 512, 00:15:16.357 "delay_cmd_submit": true, 00:15:16.357 "transport_retry_count": 4, 00:15:16.357 "bdev_retry_count": 3, 00:15:16.357 "transport_ack_timeout": 0, 00:15:16.357 "ctrlr_loss_timeout_sec": 0, 00:15:16.357 "reconnect_delay_sec": 0, 00:15:16.357 "fast_io_fail_timeout_sec": 0, 00:15:16.357 "disable_auto_failback": false, 00:15:16.357 "generate_uuids": false, 00:15:16.357 "transport_tos": 0, 00:15:16.357 "nvme_error_stat": false, 00:15:16.357 "rdma_srq_size": 0, 00:15:16.357 "io_path_stat": false, 00:15:16.357 "allow_accel_sequence": false, 00:15:16.357 "rdma_max_cq_size": 0, 00:15:16.357 "rdma_cm_event_timeout_ms": 0, 00:15:16.357 "dhchap_digests": [ 00:15:16.357 "sha256", 00:15:16.357 "sha384", 00:15:16.357 "sha512" 00:15:16.357 ], 00:15:16.357 "dhchap_dhgroups": [ 00:15:16.358 "null", 00:15:16.358 "ffdhe2048", 00:15:16.358 "ffdhe3072", 00:15:16.358 "ffdhe4096", 00:15:16.358 "ffdhe6144", 00:15:16.358 "ffdhe8192" 00:15:16.358 ] 00:15:16.358 } 00:15:16.358 }, 00:15:16.358 { 00:15:16.358 "method": "bdev_nvme_attach_controller", 00:15:16.358 "params": { 00:15:16.358 "name": "nvme0", 00:15:16.358 "trtype": "TCP", 00:15:16.358 "adrfam": "IPv4", 00:15:16.358 "traddr": "10.0.0.2", 00:15:16.358 "trsvcid": "4420", 00:15:16.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.358 "prchk_reftag": false, 00:15:16.358 "prchk_guard": false, 00:15:16.358 "ctrlr_loss_timeout_sec": 0, 00:15:16.358 "reconnect_delay_sec": 0, 00:15:16.358 "fast_io_fail_timeout_sec": 0, 00:15:16.358 "psk": "key0", 00:15:16.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.358 "hdgst": false, 00:15:16.358 "ddgst": false 00:15:16.358 } 00:15:16.358 }, 00:15:16.358 { 00:15:16.358 "method": "bdev_nvme_set_hotplug", 00:15:16.358 "params": { 00:15:16.358 "period_us": 100000, 00:15:16.358 "enable": false 00:15:16.358 } 00:15:16.358 }, 00:15:16.358 { 00:15:16.358 "method": "bdev_enable_histogram", 00:15:16.358 "params": { 00:15:16.358 "name": "nvme0n1", 00:15:16.358 "enable": true 00:15:16.358 } 00:15:16.358 }, 00:15:16.358 { 00:15:16.358 "method": "bdev_wait_for_examine" 00:15:16.358 } 00:15:16.358 ] 00:15:16.358 }, 00:15:16.358 { 00:15:16.358 "subsystem": "nbd", 00:15:16.358 "config": [] 00:15:16.358 } 00:15:16.358 ] 00:15:16.358 }' 00:15:16.358 [2024-07-24 21:55:21.965815] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:15:16.358 [2024-07-24 21:55:21.965901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86127 ] 00:15:16.616 [2024-07-24 21:55:22.103059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.616 [2024-07-24 21:55:22.185519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.616 [2024-07-24 21:55:22.320512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:16.874 [2024-07-24 21:55:22.362929] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.439 21:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:17.439 21:55:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:17.439 21:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:17.439 21:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:17.697 21:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.697 21:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.697 Running I/O for 1 seconds... 00:15:19.071 00:15:19.071 Latency(us) 00:15:19.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.071 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.071 Verification LBA range: start 0x0 length 0x2000 00:15:19.071 nvme0n1 : 1.02 4022.33 15.71 0.00 0.00 31464.26 7804.74 20733.21 00:15:19.071 =================================================================================================================== 00:15:19.071 Total : 4022.33 15.71 0.00 0.00 31464.26 7804.74 20733.21 00:15:19.071 0 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:19.071 nvmf_trace.0 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 86127 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86127 ']' 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 
86127 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86127 00:15:19.071 killing process with pid 86127 00:15:19.071 Received shutdown signal, test time was about 1.000000 seconds 00:15:19.071 00:15:19.071 Latency(us) 00:15:19.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.071 =================================================================================================================== 00:15:19.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:19.071 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86127' 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86127 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86127 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.072 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.072 rmmod nvme_tcp 00:15:19.072 rmmod nvme_fabrics 00:15:19.072 rmmod nvme_keyring 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 86095 ']' 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 86095 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86095 ']' 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86095 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86095 00:15:19.330 killing process with pid 86095 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86095' 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86095 00:15:19.330 21:55:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86095 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.330 21:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.590 21:55:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:19.590 21:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3QPn3cKz19 /tmp/tmp.fhx5PdlXe6 /tmp/tmp.2SnfbF5pTF 00:15:19.590 00:15:19.590 real 1m24.840s 00:15:19.590 user 2m14.202s 00:15:19.590 sys 0m27.393s 00:15:19.590 21:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:19.590 21:55:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.590 ************************************ 00:15:19.590 END TEST nvmf_tls 00:15:19.590 ************************************ 00:15:19.590 21:55:25 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:19.590 21:55:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:19.590 21:55:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:19.590 21:55:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:19.590 ************************************ 00:15:19.590 START TEST nvmf_fips 00:15:19.590 ************************************ 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:19.590 * Looking for test storage... 
00:15:19.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.590 21:55:25 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:19.591 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:19.850 Error setting digest 00:15:19.850 00B2F8858D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:19.850 00B2F8858D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:19.850 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:19.851 Cannot find device "nvmf_tgt_br" 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.851 Cannot find device "nvmf_tgt_br2" 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:19.851 Cannot find device "nvmf_tgt_br" 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:19.851 Cannot find device "nvmf_tgt_br2" 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.851 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:20.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:20.110 00:15:20.110 --- 10.0.0.2 ping statistics --- 00:15:20.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.110 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:20.110 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:20.110 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:20.110 00:15:20.110 --- 10.0.0.3 ping statistics --- 00:15:20.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.110 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:20.110 00:15:20.110 --- 10.0.0.1 ping statistics --- 00:15:20.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.110 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:20.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=86400 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 86400 00:15:20.110 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 86400 ']' 00:15:20.111 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.111 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:20.111 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.111 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:20.111 21:55:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:20.369 [2024-07-24 21:55:25.832955] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:15:20.369 [2024-07-24 21:55:25.833196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.369 [2024-07-24 21:55:25.970835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.369 [2024-07-24 21:55:26.064416] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.369 [2024-07-24 21:55:26.064478] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.369 [2024-07-24 21:55:26.064506] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.369 [2024-07-24 21:55:26.064517] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.369 [2024-07-24 21:55:26.064527] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.369 [2024-07-24 21:55:26.064555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.627 [2024-07-24 21:55:26.122625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:21.223 21:55:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.492 [2024-07-24 21:55:27.084271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.492 [2024-07-24 21:55:27.100207] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:21.492 [2024-07-24 21:55:27.100459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.492 [2024-07-24 21:55:27.131382] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:21.492 malloc0 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:21.492 21:55:27 
nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=86435 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 86435 /var/tmp/bdevperf.sock 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 86435 ']' 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:21.492 21:55:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:21.750 [2024-07-24 21:55:27.238638] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:21.750 [2024-07-24 21:55:27.238738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86435 ] 00:15:21.750 [2024-07-24 21:55:27.380438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.750 [2024-07-24 21:55:27.463499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.008 [2024-07-24 21:55:27.522227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:22.576 21:55:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:22.576 21:55:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:22.576 21:55:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:22.835 [2024-07-24 21:55:28.430182] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:22.835 [2024-07-24 21:55:28.430323] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:22.835 TLSTESTn1 00:15:22.835 21:55:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:23.093 Running I/O for 10 seconds... 
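The initiator side of this FIPS run reduces to two steps visible in the trace above: write the retained PSK to a 0600 key file, then ask bdevperf's RPC server to attach a TLS-protected controller with that key and start the verify workload. A condensed sketch, with paths relative to the SPDK repo and the same RPC socket, NQNs and interchange-format key used in this run (the 10-second latency table follows below):

    # Interchange-format PSK as set up by fips.sh above; keep it out of world-readable files.
    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > key.txt
    chmod 0600 key.txt

    # Attach a TLS-protected NVMe/TCP controller through bdevperf's RPC socket.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt

    # Kick off the configured verify workload.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests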
00:15:33.066 00:15:33.066 Latency(us) 00:15:33.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.066 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:33.066 Verification LBA range: start 0x0 length 0x2000 00:15:33.066 TLSTESTn1 : 10.02 3985.89 15.57 0.00 0.00 32046.75 6970.65 30980.65 00:15:33.066 =================================================================================================================== 00:15:33.066 Total : 3985.89 15.57 0.00 0.00 32046.75 6970.65 30980.65 00:15:33.066 0 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:15:33.066 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:33.066 nvmf_trace.0 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86435 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 86435 ']' 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 86435 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86435 00:15:33.326 killing process with pid 86435 00:15:33.326 Received shutdown signal, test time was about 10.000000 seconds 00:15:33.326 00:15:33.326 Latency(us) 00:15:33.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.326 =================================================================================================================== 00:15:33.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86435' 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 86435 00:15:33.326 [2024-07-24 21:55:38.812089] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:33.326 21:55:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 86435 00:15:33.326 21:55:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:33.326 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
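The nvmf_trace.0 file that process_shm just archived is the tracepoint snapshot the target advertised at startup (shm id 0, app name "nvmf"). Two equivalent ways to grab it, the first copied from the tar call above, the second assuming the default build layout for the spdk_trace tool:

```bash
# 1) Archive the raw shared-memory trace file for offline analysis, as process_shm does:
tar -C /dev/shm/ -cvzf ./nvmf_trace.0_shm.tar.gz nvmf_trace.0

# 2) Or decode it live, using the invocation the target suggested at startup:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
```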
00:15:33.326 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.585 rmmod nvme_tcp 00:15:33.585 rmmod nvme_fabrics 00:15:33.585 rmmod nvme_keyring 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 86400 ']' 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 86400 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 86400 ']' 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 86400 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86400 00:15:33.585 killing process with pid 86400 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86400' 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 86400 00:15:33.585 [2024-07-24 21:55:39.148275] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:33.585 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 86400 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:33.844 00:15:33.844 real 0m14.277s 00:15:33.844 user 0m19.578s 00:15:33.844 sys 0m5.750s 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 ************************************ 00:15:33.844 END TEST nvmf_fips 00:15:33.844 ************************************ 00:15:33.844 21:55:39 
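The teardown just traced is the generic nvmftestfini sequence plus the FIPS test's own key cleanup. Condensed, and assuming _remove_spdk_ns amounts to deleting the test namespace:

```bash
sync
modprobe -v -r nvme-tcp            # also unloads nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" # $nvmfpid: the nvmf_tgt started earlier (86400 in this run)
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
```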
nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:15:33.844 21:55:39 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:33.844 21:55:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:33.844 21:55:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:33.844 21:55:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 ************************************ 00:15:33.844 START TEST nvmf_fuzz 00:15:33.844 ************************************ 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:33.844 * Looking for test storage... 00:15:33.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
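The host identity exported here comes straight from nvme-cli. A two-line sketch of the derivation; extracting the UUID from the NQN is an assumption that simply matches the values shown above:

```bash
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:bee0c731-...
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: the uuid suffix, as logged for NVME_HOSTID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
```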
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.844 21:55:39 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.844 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.102 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:34.103 Cannot find device "nvmf_tgt_br" 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.103 Cannot find device "nvmf_tgt_br2" 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:34.103 Cannot find device "nvmf_tgt_br" 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:34.103 Cannot find device "nvmf_tgt_br2" 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.103 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.361 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:34.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:15:34.361 00:15:34.362 --- 10.0.0.2 ping statistics --- 00:15:34.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.362 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:34.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:34.362 00:15:34.362 --- 10.0.0.3 ping statistics --- 00:15:34.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.362 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:34.362 00:15:34.362 --- 10.0.0.1 ping statistics --- 00:15:34.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.362 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86760 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86760 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 86760 ']' 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
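The veth/bridge plumbing traced over the preceding ip and iptables calls is easier to read condensed. A sketch of the topology nvmf_veth_init builds, using only the interface names, addresses, and rules that appear above: one initiator-side veth on the host (10.0.0.1/24), two target-side veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.2/24 and 10.0.0.3/24), all joined through the nvmf_br bridge, with TCP/4420 allowed in:

```bash
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # host -> namespace reachability, as checked above
```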
00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:34.362 21:55:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.298 21:55:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.298 Malloc0 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.298 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.557 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.557 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.557 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.557 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.557 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.557 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:15:35.557 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:15:35.815 Shutting down the fuzz application 00:15:35.815 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:36.073 Shutting down the fuzz application 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- 
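Stripped of the rpc_cmd/xtrace wrapping, the fuzz target setup and the two fuzzer passes above reduce to the following; rpc_cmd is, in effect, scripts/rpc.py against the default /var/tmp/spdk.sock of the nvmf_tgt started in the namespace:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# Expose one malloc namespace over NVMe/TCP for the fuzzer to poke at.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Pass 1: a 30 s randomized run with a fixed seed; pass 2: a run driven by the bundled example.json.
$fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
$fuzz -m 0x2 -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a
```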
common/autotest_common.sh@10 -- # set +x 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.073 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.332 rmmod nvme_tcp 00:15:36.332 rmmod nvme_fabrics 00:15:36.332 rmmod nvme_keyring 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 86760 ']' 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 86760 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 86760 ']' 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 86760 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86760 00:15:36.332 killing process with pid 86760 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86760' 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 86760 00:15:36.332 21:55:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 86760 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:36.592 00:15:36.592 real 0m2.693s 00:15:36.592 user 0m2.853s 00:15:36.592 sys 0m0.651s 00:15:36.592 21:55:42 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:36.592 21:55:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.592 ************************************ 00:15:36.592 END TEST nvmf_fuzz 00:15:36.592 ************************************ 00:15:36.592 21:55:42 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:36.592 21:55:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:36.592 21:55:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:36.592 21:55:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.592 ************************************ 00:15:36.592 START TEST nvmf_multiconnection 00:15:36.592 ************************************ 00:15:36.592 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:36.592 * Looking for test storage... 00:15:36.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:36.592 21:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:36.592 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.593 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:36.852 Cannot find device "nvmf_tgt_br" 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.852 Cannot find device "nvmf_tgt_br2" 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:36.852 Cannot find device "nvmf_tgt_br" 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:36.852 Cannot find device "nvmf_tgt_br2" 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.852 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.853 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:37.111 21:55:42 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:37.111 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:37.111 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:37.111 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:37.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:15:37.112 00:15:37.112 --- 10.0.0.2 ping statistics --- 00:15:37.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.112 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:37.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:15:37.112 00:15:37.112 --- 10.0.0.3 ping statistics --- 00:15:37.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.112 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:37.112 00:15:37.112 --- 10.0.0.1 ping statistics --- 00:15:37.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.112 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=86951 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 86951 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 86951 ']' 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:37.112 21:55:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.112 [2024-07-24 21:55:42.717340] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:37.112 [2024-07-24 21:55:42.717424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.370 [2024-07-24 21:55:42.851950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.371 [2024-07-24 21:55:42.927169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:37.371 [2024-07-24 21:55:42.927501] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.371 [2024-07-24 21:55:42.927684] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.371 [2024-07-24 21:55:42.927811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.371 [2024-07-24 21:55:42.927853] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.371 [2024-07-24 21:55:42.928310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.371 [2024-07-24 21:55:42.928493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.371 [2024-07-24 21:55:42.929182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.371 [2024-07-24 21:55:42.929195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.371 [2024-07-24 21:55:42.983680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.371 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.629 [2024-07-24 21:55:43.090964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.629 Malloc1 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.629 [2024-07-24 21:55:43.163414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.629 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.629 Malloc2 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 Malloc3 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
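From here the same four rpc.py calls repeat once per subsystem, both above and below this point. Written as the loop multiconnection.sh effectively runs (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SUBSYS=11 as set earlier), it is simply:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"                              # Malloc1 .. Malloc11
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
```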
00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 Malloc4 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.630 21:55:43 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.630 Malloc5 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.630 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 Malloc6 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:15:37.890 21:55:43 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 Malloc7 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.890 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 Malloc8 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 Malloc9 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 Malloc10 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.891 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.149 Malloc11 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:38.149 21:55:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:40.678 21:55:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:42.574 21:55:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:15:42.574 21:55:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:15:42.574 21:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:42.574 21:55:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.574 21:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:42.574 21:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:44.525 21:55:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:15:44.783 21:55:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:15:44.783 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:44.783 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.783 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:44.783 21:55:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:46.685 21:55:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:15:46.944 21:55:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:15:46.944 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:46.944 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.944 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:46.944 21:55:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:48.901 21:55:54 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:48.901 21:55:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:51.436 21:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:53.339 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:53.339 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:15:53.340 
21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:53.340 21:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.241 21:56:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:15:55.500 21:56:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:15:55.500 21:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:55.500 21:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.500 21:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:55.500 21:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # 
return 0 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.402 21:56:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:15:57.661 21:56:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:15:57.661 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:57.661 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.661 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:57.661 21:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.637 21:56:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:15:59.895 21:56:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:15:59.895 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:59.895 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.895 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:59.895 21:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:16:01.797 21:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:01.797 21:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:01.797 21:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:16:01.797 21:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:01.797 21:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.797 21:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:16:01.797 21:56:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:01.797 [global] 00:16:01.797 thread=1 00:16:01.797 invalidate=1 00:16:01.797 rw=read 00:16:01.797 time_based=1 00:16:01.797 
runtime=10 00:16:01.797 ioengine=libaio 00:16:01.797 direct=1 00:16:01.797 bs=262144 00:16:01.797 iodepth=64 00:16:01.797 norandommap=1 00:16:01.797 numjobs=1 00:16:01.797 00:16:01.797 [job0] 00:16:01.797 filename=/dev/nvme0n1 00:16:01.797 [job1] 00:16:01.797 filename=/dev/nvme10n1 00:16:01.797 [job2] 00:16:01.797 filename=/dev/nvme1n1 00:16:01.797 [job3] 00:16:01.797 filename=/dev/nvme2n1 00:16:01.797 [job4] 00:16:01.797 filename=/dev/nvme3n1 00:16:01.797 [job5] 00:16:01.797 filename=/dev/nvme4n1 00:16:01.797 [job6] 00:16:01.797 filename=/dev/nvme5n1 00:16:01.797 [job7] 00:16:01.797 filename=/dev/nvme6n1 00:16:01.797 [job8] 00:16:01.797 filename=/dev/nvme7n1 00:16:01.797 [job9] 00:16:01.797 filename=/dev/nvme8n1 00:16:01.797 [job10] 00:16:01.797 filename=/dev/nvme9n1 00:16:02.055 Could not set queue depth (nvme0n1) 00:16:02.055 Could not set queue depth (nvme10n1) 00:16:02.055 Could not set queue depth (nvme1n1) 00:16:02.055 Could not set queue depth (nvme2n1) 00:16:02.055 Could not set queue depth (nvme3n1) 00:16:02.055 Could not set queue depth (nvme4n1) 00:16:02.055 Could not set queue depth (nvme5n1) 00:16:02.055 Could not set queue depth (nvme6n1) 00:16:02.055 Could not set queue depth (nvme7n1) 00:16:02.055 Could not set queue depth (nvme8n1) 00:16:02.055 Could not set queue depth (nvme9n1) 00:16:02.055 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:02.055 fio-3.35 00:16:02.055 Starting 11 threads 00:16:14.265 00:16:14.265 job0: (groupid=0, jobs=1): err= 0: pid=87401: Wed Jul 24 21:56:18 2024 00:16:14.265 read: IOPS=574, BW=144MiB/s (151MB/s)(1447MiB/10073msec) 00:16:14.265 slat (usec): min=22, max=59796, avg=1725.72, stdev=4071.72 00:16:14.265 clat (msec): min=16, max=170, avg=109.52, stdev=10.62 00:16:14.265 lat (msec): min=19, max=170, avg=111.25, stdev=10.58 00:16:14.265 clat percentiles (msec): 00:16:14.265 | 1.00th=[ 82], 5.00th=[ 93], 10.00th=[ 99], 20.00th=[ 103], 00:16:14.265 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:16:14.265 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 122], 95.00th=[ 126], 00:16:14.265 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 169], 00:16:14.265 | 99.99th=[ 171] 00:16:14.265 bw ( KiB/s): min=134144, max=150528, per=8.56%, avg=146494.15, 
stdev=4495.90, samples=20 00:16:14.265 iops : min= 524, max= 588, avg=572.20, stdev=17.57, samples=20 00:16:14.265 lat (msec) : 20=0.05%, 50=0.03%, 100=14.01%, 250=85.90% 00:16:14.265 cpu : usr=0.36%, sys=2.40%, ctx=1239, majf=0, minf=4097 00:16:14.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:14.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.265 issued rwts: total=5787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.265 job1: (groupid=0, jobs=1): err= 0: pid=87403: Wed Jul 24 21:56:18 2024 00:16:14.265 read: IOPS=413, BW=103MiB/s (108MB/s)(1046MiB/10120msec) 00:16:14.265 slat (usec): min=19, max=47258, avg=2368.03, stdev=5484.45 00:16:14.265 clat (msec): min=22, max=273, avg=152.12, stdev=23.93 00:16:14.265 lat (msec): min=23, max=285, avg=154.49, stdev=24.54 00:16:14.265 clat percentiles (msec): 00:16:14.265 | 1.00th=[ 58], 5.00th=[ 101], 10.00th=[ 114], 20.00th=[ 150], 00:16:14.265 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:16:14.265 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 174], 00:16:14.265 | 99.00th=[ 190], 99.50th=[ 222], 99.90th=[ 275], 99.95th=[ 275], 00:16:14.265 | 99.99th=[ 275] 00:16:14.265 bw ( KiB/s): min=97280, max=158720, per=6.16%, avg=105472.75, stdev=14138.44, samples=20 00:16:14.265 iops : min= 380, max= 620, avg=411.85, stdev=55.27, samples=20 00:16:14.265 lat (msec) : 50=0.72%, 100=4.35%, 250=94.77%, 500=0.17% 00:16:14.265 cpu : usr=0.30%, sys=1.87%, ctx=1054, majf=0, minf=4097 00:16:14.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:14.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.265 issued rwts: total=4184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.265 job2: (groupid=0, jobs=1): err= 0: pid=87404: Wed Jul 24 21:56:18 2024 00:16:14.265 read: IOPS=413, BW=103MiB/s (108MB/s)(1046MiB/10129msec) 00:16:14.265 slat (usec): min=19, max=43432, avg=2388.71, stdev=5568.59 00:16:14.265 clat (msec): min=19, max=295, avg=152.30, stdev=22.71 00:16:14.265 lat (msec): min=22, max=295, avg=154.69, stdev=23.27 00:16:14.266 clat percentiles (msec): 00:16:14.266 | 1.00th=[ 72], 5.00th=[ 104], 10.00th=[ 118], 20.00th=[ 150], 00:16:14.266 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 157], 60.00th=[ 159], 00:16:14.266 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 174], 00:16:14.266 | 99.00th=[ 190], 99.50th=[ 228], 99.90th=[ 255], 99.95th=[ 255], 00:16:14.266 | 99.99th=[ 296] 00:16:14.266 bw ( KiB/s): min=94422, max=153088, per=6.16%, avg=105431.50, stdev=13272.38, samples=20 00:16:14.266 iops : min= 368, max= 598, avg=411.80, stdev=51.88, samples=20 00:16:14.266 lat (msec) : 20=0.02%, 50=0.45%, 100=3.61%, 250=95.70%, 500=0.22% 00:16:14.266 cpu : usr=0.20%, sys=2.02%, ctx=966, majf=0, minf=4097 00:16:14.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:14.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.266 issued rwts: total=4185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.266 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:16:14.266 job3: (groupid=0, jobs=1): err= 0: pid=87405: Wed Jul 24 21:56:18 2024 00:16:14.266 read: IOPS=409, BW=102MiB/s (107MB/s)(1034MiB/10109msec) 00:16:14.266 slat (usec): min=19, max=53452, avg=2414.41, stdev=5636.44 00:16:14.266 clat (msec): min=26, max=283, avg=153.81, stdev=20.04 00:16:14.266 lat (msec): min=26, max=283, avg=156.23, stdev=20.50 00:16:14.266 clat percentiles (msec): 00:16:14.266 | 1.00th=[ 90], 5.00th=[ 111], 10.00th=[ 134], 20.00th=[ 150], 00:16:14.266 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:16:14.266 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:16:14.266 | 99.00th=[ 194], 99.50th=[ 243], 99.90th=[ 271], 99.95th=[ 271], 00:16:14.266 | 99.99th=[ 284] 00:16:14.266 bw ( KiB/s): min=96256, max=131072, per=6.12%, avg=104652.37, stdev=8990.43, samples=19 00:16:14.266 iops : min= 376, max= 512, avg=408.74, stdev=35.13, samples=19 00:16:14.266 lat (msec) : 50=0.31%, 100=2.18%, 250=97.15%, 500=0.36% 00:16:14.266 cpu : usr=0.17%, sys=1.44%, ctx=967, majf=0, minf=4097 00:16:14.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:14.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.266 issued rwts: total=4135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.266 job4: (groupid=0, jobs=1): err= 0: pid=87406: Wed Jul 24 21:56:18 2024 00:16:14.266 read: IOPS=427, BW=107MiB/s (112MB/s)(1081MiB/10120msec) 00:16:14.266 slat (usec): min=22, max=113984, avg=2287.84, stdev=5650.36 00:16:14.266 clat (msec): min=24, max=275, avg=147.22, stdev=32.19 00:16:14.266 lat (msec): min=25, max=297, avg=149.51, stdev=32.83 00:16:14.266 clat percentiles (msec): 00:16:14.266 | 1.00th=[ 43], 5.00th=[ 78], 10.00th=[ 89], 20.00th=[ 148], 00:16:14.266 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:16:14.266 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:16:14.266 | 99.00th=[ 199], 99.50th=[ 251], 99.90th=[ 275], 99.95th=[ 275], 00:16:14.266 | 99.99th=[ 275] 00:16:14.266 bw ( KiB/s): min=94984, max=183296, per=6.37%, avg=108992.40, stdev=22959.94, samples=20 00:16:14.266 iops : min= 371, max= 716, avg=425.75, stdev=89.69, samples=20 00:16:14.266 lat (msec) : 50=1.74%, 100=12.80%, 250=84.91%, 500=0.56% 00:16:14.266 cpu : usr=0.28%, sys=1.70%, ctx=1006, majf=0, minf=4097 00:16:14.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:14.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.266 issued rwts: total=4322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.266 job5: (groupid=0, jobs=1): err= 0: pid=87407: Wed Jul 24 21:56:18 2024 00:16:14.266 read: IOPS=412, BW=103MiB/s (108MB/s)(1044MiB/10126msec) 00:16:14.266 slat (usec): min=18, max=43454, avg=2390.44, stdev=5492.95 00:16:14.266 clat (msec): min=22, max=286, avg=152.53, stdev=23.45 00:16:14.266 lat (msec): min=22, max=286, avg=154.92, stdev=23.97 00:16:14.266 clat percentiles (msec): 00:16:14.266 | 1.00th=[ 61], 5.00th=[ 104], 10.00th=[ 114], 20.00th=[ 150], 00:16:14.266 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:16:14.266 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 
169], 95.00th=[ 176], 00:16:14.266 | 99.00th=[ 188], 99.50th=[ 222], 99.90th=[ 268], 99.95th=[ 284], 00:16:14.266 | 99.99th=[ 288] 00:16:14.266 bw ( KiB/s): min=96256, max=147968, per=6.16%, avg=105308.20, stdev=14083.40, samples=20 00:16:14.266 iops : min= 376, max= 578, avg=411.35, stdev=55.02, samples=20 00:16:14.266 lat (msec) : 50=0.65%, 100=2.75%, 250=96.43%, 500=0.17% 00:16:14.266 cpu : usr=0.25%, sys=1.73%, ctx=969, majf=0, minf=4097 00:16:14.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:14.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.266 issued rwts: total=4177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.266 job6: (groupid=0, jobs=1): err= 0: pid=87408: Wed Jul 24 21:56:18 2024 00:16:14.266 read: IOPS=582, BW=146MiB/s (153MB/s)(1466MiB/10073msec) 00:16:14.266 slat (usec): min=13, max=79461, avg=1697.73, stdev=4130.36 00:16:14.266 clat (msec): min=6, max=168, avg=108.05, stdev=15.12 00:16:14.266 lat (msec): min=6, max=168, avg=109.75, stdev=15.24 00:16:14.266 clat percentiles (msec): 00:16:14.266 | 1.00th=[ 20], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 104], 00:16:14.266 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 111], 00:16:14.266 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 125], 00:16:14.266 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 165], 99.95th=[ 169], 00:16:14.266 | 99.99th=[ 169] 00:16:14.266 bw ( KiB/s): min=136704, max=171863, per=8.68%, avg=148482.00, stdev=6435.73, samples=20 00:16:14.266 iops : min= 534, max= 671, avg=579.95, stdev=25.06, samples=20 00:16:14.266 lat (msec) : 10=0.31%, 20=0.72%, 50=0.80%, 100=10.90%, 250=87.28% 00:16:14.266 cpu : usr=0.32%, sys=2.41%, ctx=1202, majf=0, minf=4097 00:16:14.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:14.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.266 issued rwts: total=5863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.266 job7: (groupid=0, jobs=1): err= 0: pid=87409: Wed Jul 24 21:56:18 2024 00:16:14.266 read: IOPS=578, BW=145MiB/s (152MB/s)(1454MiB/10058msec) 00:16:14.266 slat (usec): min=19, max=31927, avg=1715.47, stdev=3872.28 00:16:14.266 clat (msec): min=46, max=177, avg=108.86, stdev= 9.10 00:16:14.266 lat (msec): min=50, max=177, avg=110.57, stdev= 9.12 00:16:14.266 clat percentiles (msec): 00:16:14.266 | 1.00th=[ 89], 5.00th=[ 96], 10.00th=[ 100], 20.00th=[ 103], 00:16:14.266 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 109], 60.00th=[ 111], 00:16:14.266 | 70.00th=[ 113], 80.00th=[ 115], 90.00th=[ 120], 95.00th=[ 123], 00:16:14.266 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 169], 99.95th=[ 178], 00:16:14.266 | 99.99th=[ 178] 00:16:14.266 bw ( KiB/s): min=140288, max=150528, per=8.61%, avg=147299.15, stdev=2805.15, samples=20 00:16:14.266 iops : min= 548, max= 588, avg=575.35, stdev=10.96, samples=20 00:16:14.266 lat (msec) : 50=0.02%, 100=12.64%, 250=87.34% 00:16:14.266 cpu : usr=0.30%, sys=2.52%, ctx=1242, majf=0, minf=4097 00:16:14.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:14.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.266 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.266 issued rwts: total=5814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.266 job8: (groupid=0, jobs=1): err= 0: pid=87410: Wed Jul 24 21:56:18 2024 00:16:14.266 read: IOPS=421, BW=105MiB/s (111MB/s)(1068MiB/10126msec) 00:16:14.266 slat (usec): min=22, max=70372, avg=2333.30, stdev=5604.35 00:16:14.266 clat (msec): min=16, max=279, avg=149.19, stdev=29.96 00:16:14.266 lat (msec): min=19, max=286, avg=151.52, stdev=30.51 00:16:14.266 clat percentiles (msec): 00:16:14.266 | 1.00th=[ 48], 5.00th=[ 75], 10.00th=[ 99], 20.00th=[ 150], 00:16:14.266 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 157], 60.00th=[ 159], 00:16:14.266 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 174], 00:16:14.266 | 99.00th=[ 192], 99.50th=[ 230], 99.90th=[ 271], 99.95th=[ 271], 00:16:14.266 | 99.99th=[ 279] 00:16:14.266 bw ( KiB/s): min=87552, max=177664, per=6.29%, avg=107689.05, stdev=21888.25, samples=20 00:16:14.266 iops : min= 342, max= 694, avg=420.65, stdev=85.50, samples=20 00:16:14.266 lat (msec) : 20=0.07%, 50=1.01%, 100=10.07%, 250=88.41%, 500=0.44% 00:16:14.267 cpu : usr=0.23%, sys=1.70%, ctx=962, majf=0, minf=4097 00:16:14.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:14.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.267 issued rwts: total=4271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.267 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.267 job9: (groupid=0, jobs=1): err= 0: pid=87411: Wed Jul 24 21:56:18 2024 00:16:14.267 read: IOPS=1918, BW=480MiB/s (503MB/s)(4799MiB/10007msec) 00:16:14.267 slat (usec): min=17, max=43737, avg=517.36, stdev=1229.74 00:16:14.267 clat (msec): min=6, max=127, avg=32.79, stdev=10.31 00:16:14.267 lat (msec): min=7, max=127, avg=33.31, stdev=10.43 00:16:14.267 clat percentiles (msec): 00:16:14.267 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:16:14.267 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:16:14.267 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 34], 95.00th=[ 35], 00:16:14.267 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 121], 99.95th=[ 125], 00:16:14.267 | 99.99th=[ 128] 00:16:14.267 bw ( KiB/s): min=172032, max=529408, per=28.59%, avg=489067.79, stdev=96217.04, samples=19 00:16:14.267 iops : min= 672, max= 2068, avg=1910.42, stdev=375.85, samples=19 00:16:14.267 lat (msec) : 10=0.02%, 20=0.09%, 50=96.68%, 100=2.42%, 250=0.79% 00:16:14.267 cpu : usr=0.79%, sys=5.60%, ctx=3683, majf=0, minf=4097 00:16:14.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:14.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.267 issued rwts: total=19194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.267 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.267 job10: (groupid=0, jobs=1): err= 0: pid=87412: Wed Jul 24 21:56:18 2024 00:16:14.267 read: IOPS=572, BW=143MiB/s (150MB/s)(1440MiB/10059msec) 00:16:14.267 slat (usec): min=17, max=45252, avg=1732.22, stdev=4144.07 00:16:14.267 clat (msec): min=28, max=183, avg=109.95, stdev= 9.76 00:16:14.267 lat (msec): min=28, max=183, avg=111.68, stdev= 9.74 00:16:14.267 clat percentiles (msec): 00:16:14.267 | 1.00th=[ 
87], 5.00th=[ 96], 10.00th=[ 100], 20.00th=[ 104], 00:16:14.267 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:16:14.267 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 122], 95.00th=[ 127], 00:16:14.267 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 169], 99.95th=[ 169], 00:16:14.267 | 99.99th=[ 184] 00:16:14.267 bw ( KiB/s): min=119022, max=152064, per=8.53%, avg=145903.20, stdev=6770.98, samples=20 00:16:14.267 iops : min= 464, max= 594, avg=569.85, stdev=26.63, samples=20 00:16:14.267 lat (msec) : 50=0.10%, 100=10.84%, 250=89.06% 00:16:14.267 cpu : usr=0.29%, sys=2.14%, ctx=1160, majf=0, minf=4097 00:16:14.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:14.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:14.267 issued rwts: total=5759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.267 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:14.267 00:16:14.267 Run status group 0 (all jobs): 00:16:14.267 READ: bw=1671MiB/s (1752MB/s), 102MiB/s-480MiB/s (107MB/s-503MB/s), io=16.5GiB (17.7GB), run=10007-10129msec 00:16:14.267 00:16:14.267 Disk stats (read/write): 00:16:14.267 nvme0n1: ios=11456/0, merge=0/0, ticks=1236631/0, in_queue=1236631, util=97.85% 00:16:14.267 nvme10n1: ios=8243/0, merge=0/0, ticks=1227443/0, in_queue=1227443, util=97.83% 00:16:14.267 nvme1n1: ios=8247/0, merge=0/0, ticks=1229218/0, in_queue=1229218, util=98.18% 00:16:14.267 nvme2n1: ios=8146/0, merge=0/0, ticks=1225402/0, in_queue=1225402, util=98.10% 00:16:14.267 nvme3n1: ios=8522/0, merge=0/0, ticks=1224269/0, in_queue=1224269, util=98.15% 00:16:14.267 nvme4n1: ios=8240/0, merge=0/0, ticks=1228446/0, in_queue=1228446, util=98.52% 00:16:14.267 nvme5n1: ios=11605/0, merge=0/0, ticks=1231800/0, in_queue=1231800, util=98.65% 00:16:14.267 nvme6n1: ios=11511/0, merge=0/0, ticks=1235995/0, in_queue=1235995, util=98.62% 00:16:14.267 nvme7n1: ios=8421/0, merge=0/0, ticks=1228279/0, in_queue=1228279, util=98.99% 00:16:14.267 nvme8n1: ios=37334/0, merge=0/0, ticks=1211543/0, in_queue=1211543, util=99.03% 00:16:14.267 nvme9n1: ios=11398/0, merge=0/0, ticks=1236334/0, in_queue=1236334, util=99.07% 00:16:14.267 21:56:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:14.267 [global] 00:16:14.267 thread=1 00:16:14.267 invalidate=1 00:16:14.267 rw=randwrite 00:16:14.267 time_based=1 00:16:14.267 runtime=10 00:16:14.267 ioengine=libaio 00:16:14.267 direct=1 00:16:14.267 bs=262144 00:16:14.267 iodepth=64 00:16:14.267 norandommap=1 00:16:14.267 numjobs=1 00:16:14.267 00:16:14.267 [job0] 00:16:14.267 filename=/dev/nvme0n1 00:16:14.267 [job1] 00:16:14.267 filename=/dev/nvme10n1 00:16:14.267 [job2] 00:16:14.267 filename=/dev/nvme1n1 00:16:14.267 [job3] 00:16:14.267 filename=/dev/nvme2n1 00:16:14.267 [job4] 00:16:14.267 filename=/dev/nvme3n1 00:16:14.267 [job5] 00:16:14.267 filename=/dev/nvme4n1 00:16:14.267 [job6] 00:16:14.267 filename=/dev/nvme5n1 00:16:14.267 [job7] 00:16:14.267 filename=/dev/nvme6n1 00:16:14.267 [job8] 00:16:14.267 filename=/dev/nvme7n1 00:16:14.267 [job9] 00:16:14.267 filename=/dev/nvme8n1 00:16:14.267 [job10] 00:16:14.267 filename=/dev/nvme9n1 00:16:14.267 Could not set queue depth (nvme0n1) 00:16:14.267 Could not set queue depth (nvme10n1) 00:16:14.267 Could not set queue depth (nvme1n1) 00:16:14.267 Could not set queue 
depth (nvme2n1) 00:16:14.267 Could not set queue depth (nvme3n1) 00:16:14.267 Could not set queue depth (nvme4n1) 00:16:14.267 Could not set queue depth (nvme5n1) 00:16:14.267 Could not set queue depth (nvme6n1) 00:16:14.267 Could not set queue depth (nvme7n1) 00:16:14.267 Could not set queue depth (nvme8n1) 00:16:14.267 Could not set queue depth (nvme9n1) 00:16:14.267 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:14.267 fio-3.35 00:16:14.267 Starting 11 threads 00:16:24.261 00:16:24.261 job0: (groupid=0, jobs=1): err= 0: pid=87610: Wed Jul 24 21:56:28 2024 00:16:24.261 write: IOPS=422, BW=106MiB/s (111MB/s)(1071MiB/10143msec); 0 zone resets 00:16:24.261 slat (usec): min=20, max=12187, avg=2328.18, stdev=3984.30 00:16:24.261 clat (msec): min=14, max=292, avg=149.06, stdev=16.08 00:16:24.261 lat (msec): min=15, max=292, avg=151.38, stdev=15.79 00:16:24.261 clat percentiles (msec): 00:16:24.261 | 1.00th=[ 73], 5.00th=[ 142], 10.00th=[ 142], 20.00th=[ 144], 00:16:24.261 | 30.00th=[ 150], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 153], 00:16:24.261 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:16:24.261 | 99.00th=[ 190], 99.50th=[ 247], 99.90th=[ 284], 99.95th=[ 284], 00:16:24.261 | 99.99th=[ 292] 00:16:24.261 bw ( KiB/s): min=104448, max=116736, per=6.80%, avg=108083.20, stdev=2377.82, samples=20 00:16:24.261 iops : min= 408, max= 456, avg=422.20, stdev= 9.29, samples=20 00:16:24.261 lat (msec) : 20=0.19%, 50=0.47%, 100=0.56%, 250=98.37%, 500=0.42% 00:16:24.261 cpu : usr=0.77%, sys=1.35%, ctx=5488, majf=0, minf=1 00:16:24.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.261 issued rwts: total=0,4285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.261 job1: (groupid=0, jobs=1): err= 0: pid=87613: Wed Jul 24 21:56:28 2024 00:16:24.261 write: IOPS=762, BW=191MiB/s (200MB/s)(1922MiB/10081msec); 0 zone resets 00:16:24.261 slat (usec): min=16, max=12912, avg=1296.37, 
stdev=2208.32 00:16:24.261 clat (msec): min=15, max=164, avg=82.61, stdev=11.71 00:16:24.261 lat (msec): min=16, max=164, avg=83.90, stdev=11.68 00:16:24.261 clat percentiles (msec): 00:16:24.261 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 83], 00:16:24.261 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 87], 60.00th=[ 88], 00:16:24.261 | 70.00th=[ 88], 80.00th=[ 89], 90.00th=[ 89], 95.00th=[ 89], 00:16:24.261 | 99.00th=[ 90], 99.50th=[ 111], 99.90th=[ 155], 99.95th=[ 161], 00:16:24.261 | 99.99th=[ 165] 00:16:24.261 bw ( KiB/s): min=184832, max=273955, per=12.28%, avg=195299.30, stdev=24819.44, samples=20 00:16:24.261 iops : min= 722, max= 1070, avg=762.65, stdev=96.86, samples=20 00:16:24.261 lat (msec) : 20=0.05%, 50=0.36%, 100=98.99%, 250=0.60% 00:16:24.261 cpu : usr=1.42%, sys=2.01%, ctx=9003, majf=0, minf=1 00:16:24.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.261 issued rwts: total=0,7687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.261 job2: (groupid=0, jobs=1): err= 0: pid=87623: Wed Jul 24 21:56:28 2024 00:16:24.261 write: IOPS=534, BW=134MiB/s (140MB/s)(1353MiB/10114msec); 0 zone resets 00:16:24.261 slat (usec): min=18, max=16125, avg=1844.65, stdev=3140.51 00:16:24.261 clat (msec): min=18, max=225, avg=117.75, stdev= 9.87 00:16:24.262 lat (msec): min=18, max=225, avg=119.60, stdev= 9.50 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 101], 5.00th=[ 112], 10.00th=[ 112], 20.00th=[ 113], 00:16:24.262 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 120], 60.00th=[ 121], 00:16:24.262 | 70.00th=[ 121], 80.00th=[ 121], 90.00th=[ 122], 95.00th=[ 122], 00:16:24.262 | 99.00th=[ 126], 99.50th=[ 174], 99.90th=[ 220], 99.95th=[ 220], 00:16:24.262 | 99.99th=[ 226] 00:16:24.262 bw ( KiB/s): min=135168, max=139497, per=8.61%, avg=136966.85, stdev=1148.14, samples=20 00:16:24.262 iops : min= 528, max= 544, avg=534.95, stdev= 4.37, samples=20 00:16:24.262 lat (msec) : 20=0.07%, 50=0.37%, 100=0.59%, 250=98.96% 00:16:24.262 cpu : usr=1.09%, sys=1.29%, ctx=6720, majf=0, minf=1 00:16:24.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.262 issued rwts: total=0,5410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.262 job3: (groupid=0, jobs=1): err= 0: pid=87624: Wed Jul 24 21:56:28 2024 00:16:24.262 write: IOPS=710, BW=178MiB/s (186MB/s)(1789MiB/10076msec); 0 zone resets 00:16:24.262 slat (usec): min=18, max=48067, avg=1371.60, stdev=2431.59 00:16:24.262 clat (msec): min=17, max=181, avg=88.73, stdev=15.38 00:16:24.262 lat (msec): min=17, max=181, avg=90.10, stdev=15.46 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 48], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 84], 00:16:24.262 | 30.00th=[ 87], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 88], 00:16:24.262 | 70.00th=[ 88], 80.00th=[ 89], 90.00th=[ 89], 95.00th=[ 136], 00:16:24.262 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 174], 00:16:24.262 | 99.99th=[ 182] 00:16:24.262 bw ( KiB/s): min=102400, max=188928, per=11.42%, avg=181529.60, stdev=19515.11, samples=20 
00:16:24.262 iops : min= 400, max= 738, avg=709.10, stdev=76.23, samples=20 00:16:24.262 lat (msec) : 20=0.06%, 50=1.03%, 100=93.23%, 250=5.68% 00:16:24.262 cpu : usr=1.25%, sys=2.07%, ctx=6706, majf=0, minf=1 00:16:24.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.262 issued rwts: total=0,7154,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.262 job4: (groupid=0, jobs=1): err= 0: pid=87626: Wed Jul 24 21:56:28 2024 00:16:24.262 write: IOPS=731, BW=183MiB/s (192MB/s)(1842MiB/10074msec); 0 zone resets 00:16:24.262 slat (usec): min=16, max=47744, avg=1352.45, stdev=2341.41 00:16:24.262 clat (msec): min=49, max=154, avg=86.16, stdev= 8.83 00:16:24.262 lat (msec): min=49, max=154, avg=87.51, stdev= 8.67 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 80], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 82], 00:16:24.262 | 30.00th=[ 85], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 86], 00:16:24.262 | 70.00th=[ 86], 80.00th=[ 87], 90.00th=[ 87], 95.00th=[ 112], 00:16:24.262 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 150], 00:16:24.262 | 99.99th=[ 155] 00:16:24.262 bw ( KiB/s): min=131072, max=193024, per=11.76%, avg=186956.80, stdev=15997.35, samples=20 00:16:24.262 iops : min= 512, max= 754, avg=730.30, stdev=62.49, samples=20 00:16:24.262 lat (msec) : 50=0.04%, 100=93.23%, 250=6.73% 00:16:24.262 cpu : usr=1.40%, sys=2.01%, ctx=9153, majf=0, minf=1 00:16:24.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.262 issued rwts: total=0,7366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.262 job5: (groupid=0, jobs=1): err= 0: pid=87628: Wed Jul 24 21:56:28 2024 00:16:24.262 write: IOPS=437, BW=109MiB/s (115MB/s)(1109MiB/10146msec); 0 zone resets 00:16:24.262 slat (usec): min=20, max=11757, avg=2204.19, stdev=3883.79 00:16:24.262 clat (msec): min=4, max=289, avg=144.12, stdev=21.98 00:16:24.262 lat (msec): min=5, max=289, avg=146.32, stdev=22.05 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 41], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 142], 00:16:24.262 | 30.00th=[ 144], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 153], 00:16:24.262 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:16:24.262 | 99.00th=[ 188], 99.50th=[ 234], 99.90th=[ 279], 99.95th=[ 279], 00:16:24.262 | 99.99th=[ 292] 00:16:24.262 bw ( KiB/s): min=106496, max=155959, per=7.04%, avg=111964.35, stdev=12813.78, samples=20 00:16:24.262 iops : min= 416, max= 609, avg=437.35, stdev=50.02, samples=20 00:16:24.262 lat (msec) : 10=0.11%, 20=0.27%, 50=0.88%, 100=1.85%, 250=96.48% 00:16:24.262 lat (msec) : 500=0.41% 00:16:24.262 cpu : usr=0.80%, sys=1.16%, ctx=5409, majf=0, minf=1 00:16:24.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.262 issued rwts: total=0,4436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.262 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:16:24.262 job6: (groupid=0, jobs=1): err= 0: pid=87629: Wed Jul 24 21:56:28 2024 00:16:24.262 write: IOPS=534, BW=134MiB/s (140MB/s)(1352MiB/10112msec); 0 zone resets 00:16:24.262 slat (usec): min=17, max=18668, avg=1844.57, stdev=3145.52 00:16:24.262 clat (msec): min=14, max=223, avg=117.74, stdev=10.05 00:16:24.262 lat (msec): min=14, max=223, avg=119.59, stdev= 9.69 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 96], 5.00th=[ 112], 10.00th=[ 112], 20.00th=[ 113], 00:16:24.262 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 120], 60.00th=[ 121], 00:16:24.262 | 70.00th=[ 121], 80.00th=[ 121], 90.00th=[ 122], 95.00th=[ 122], 00:16:24.262 | 99.00th=[ 128], 99.50th=[ 171], 99.90th=[ 218], 99.95th=[ 218], 00:16:24.262 | 99.99th=[ 224] 00:16:24.262 bw ( KiB/s): min=135168, max=139776, per=8.61%, avg=136871.45, stdev=1426.07, samples=20 00:16:24.262 iops : min= 528, max= 546, avg=534.65, stdev= 5.57, samples=20 00:16:24.262 lat (msec) : 20=0.15%, 50=0.37%, 100=0.54%, 250=98.95% 00:16:24.262 cpu : usr=1.15%, sys=1.42%, ctx=4907, majf=0, minf=1 00:16:24.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.262 issued rwts: total=0,5409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.262 job7: (groupid=0, jobs=1): err= 0: pid=87630: Wed Jul 24 21:56:28 2024 00:16:24.262 write: IOPS=734, BW=184MiB/s (193MB/s)(1852MiB/10078msec); 0 zone resets 00:16:24.262 slat (usec): min=17, max=8023, avg=1336.02, stdev=2277.77 00:16:24.262 clat (msec): min=9, max=160, avg=85.72, stdev= 9.58 00:16:24.262 lat (msec): min=9, max=160, avg=87.06, stdev= 9.46 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 82], 00:16:24.262 | 30.00th=[ 85], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 86], 00:16:24.262 | 70.00th=[ 86], 80.00th=[ 87], 90.00th=[ 87], 95.00th=[ 111], 00:16:24.262 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 150], 99.95th=[ 157], 00:16:24.262 | 99.99th=[ 161] 00:16:24.262 bw ( KiB/s): min=149504, max=193536, per=11.82%, avg=187965.40, stdev=12492.05, samples=20 00:16:24.262 iops : min= 584, max= 756, avg=734.20, stdev=48.91, samples=20 00:16:24.262 lat (msec) : 10=0.05%, 20=0.11%, 50=0.32%, 100=93.26%, 250=6.25% 00:16:24.262 cpu : usr=1.23%, sys=1.91%, ctx=10162, majf=0, minf=1 00:16:24.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.262 issued rwts: total=0,7406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.262 job8: (groupid=0, jobs=1): err= 0: pid=87631: Wed Jul 24 21:56:28 2024 00:16:24.262 write: IOPS=420, BW=105MiB/s (110MB/s)(1067MiB/10147msec); 0 zone resets 00:16:24.262 slat (usec): min=21, max=45540, avg=2338.76, stdev=4048.19 00:16:24.262 clat (msec): min=6, max=293, avg=149.79, stdev=16.60 00:16:24.262 lat (msec): min=6, max=293, avg=152.12, stdev=16.32 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 71], 5.00th=[ 142], 10.00th=[ 142], 20.00th=[ 144], 00:16:24.262 | 30.00th=[ 150], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 153], 00:16:24.262 | 
70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 157], 00:16:24.262 | 99.00th=[ 190], 99.50th=[ 247], 99.90th=[ 284], 99.95th=[ 284], 00:16:24.262 | 99.99th=[ 296] 00:16:24.262 bw ( KiB/s): min=104239, max=108761, per=6.77%, avg=107676.70, stdev=1236.89, samples=20 00:16:24.262 iops : min= 407, max= 424, avg=420.30, stdev= 4.81, samples=20 00:16:24.262 lat (msec) : 10=0.14%, 50=0.56%, 100=0.66%, 250=98.22%, 500=0.42% 00:16:24.262 cpu : usr=0.83%, sys=1.25%, ctx=5907, majf=0, minf=1 00:16:24.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.262 issued rwts: total=0,4267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.262 job9: (groupid=0, jobs=1): err= 0: pid=87632: Wed Jul 24 21:56:28 2024 00:16:24.262 write: IOPS=532, BW=133MiB/s (139MB/s)(1345MiB/10107msec); 0 zone resets 00:16:24.262 slat (usec): min=19, max=30465, avg=1854.63, stdev=3185.70 00:16:24.262 clat (msec): min=33, max=224, avg=118.38, stdev= 8.90 00:16:24.262 lat (msec): min=33, max=224, avg=120.24, stdev= 8.43 00:16:24.262 clat percentiles (msec): 00:16:24.262 | 1.00th=[ 109], 5.00th=[ 112], 10.00th=[ 113], 20.00th=[ 113], 00:16:24.262 | 30.00th=[ 120], 40.00th=[ 120], 50.00th=[ 120], 60.00th=[ 121], 00:16:24.262 | 70.00th=[ 121], 80.00th=[ 121], 90.00th=[ 122], 95.00th=[ 123], 00:16:24.262 | 99.00th=[ 138], 99.50th=[ 171], 99.90th=[ 218], 99.95th=[ 218], 00:16:24.262 | 99.99th=[ 226] 00:16:24.262 bw ( KiB/s): min=124928, max=137728, per=8.56%, avg=136064.00, stdev=2833.10, samples=20 00:16:24.263 iops : min= 488, max= 538, avg=531.50, stdev=11.07, samples=20 00:16:24.263 lat (msec) : 50=0.22%, 100=0.52%, 250=99.26% 00:16:24.263 cpu : usr=0.95%, sys=1.48%, ctx=6963, majf=0, minf=1 00:16:24.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:24.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.263 issued rwts: total=0,5378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.263 job10: (groupid=0, jobs=1): err= 0: pid=87633: Wed Jul 24 21:56:28 2024 00:16:24.263 write: IOPS=417, BW=104MiB/s (109MB/s)(1059MiB/10146msec); 0 zone resets 00:16:24.263 slat (usec): min=19, max=73008, avg=2357.44, stdev=4164.51 00:16:24.263 clat (msec): min=75, max=290, avg=150.94, stdev=11.94 00:16:24.263 lat (msec): min=75, max=290, avg=153.30, stdev=11.35 00:16:24.263 clat percentiles (msec): 00:16:24.263 | 1.00th=[ 140], 5.00th=[ 142], 10.00th=[ 142], 20.00th=[ 144], 00:16:24.263 | 30.00th=[ 150], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 153], 00:16:24.263 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 155], 95.00th=[ 157], 00:16:24.263 | 99.00th=[ 203], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 00:16:24.263 | 99.99th=[ 292] 00:16:24.263 bw ( KiB/s): min=90112, max=109056, per=6.71%, avg=106777.60, stdev=4117.75, samples=20 00:16:24.263 iops : min= 352, max= 426, avg=417.10, stdev=16.08, samples=20 00:16:24.263 lat (msec) : 100=0.28%, 250=99.29%, 500=0.43% 00:16:24.263 cpu : usr=0.80%, sys=1.27%, ctx=5397, majf=0, minf=1 00:16:24.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:24.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:24.263 issued rwts: total=0,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.263 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.263 00:16:24.263 Run status group 0 (all jobs): 00:16:24.263 WRITE: bw=1553MiB/s (1628MB/s), 104MiB/s-191MiB/s (109MB/s-200MB/s), io=15.4GiB (16.5GB), run=10074-10147msec 00:16:24.263 00:16:24.263 Disk stats (read/write): 00:16:24.263 nvme0n1: ios=50/8447, merge=0/0, ticks=58/1214252, in_queue=1214310, util=98.02% 00:16:24.263 nvme10n1: ios=49/15249, merge=0/0, ticks=31/1217437, in_queue=1217468, util=98.20% 00:16:24.263 nvme1n1: ios=34/10692, merge=0/0, ticks=42/1216737, in_queue=1216779, util=98.28% 00:16:24.263 nvme2n1: ios=25/14175, merge=0/0, ticks=31/1216965, in_queue=1216996, util=98.21% 00:16:24.263 nvme3n1: ios=13/14574, merge=0/0, ticks=25/1215184, in_queue=1215209, util=98.09% 00:16:24.263 nvme4n1: ios=0/8737, merge=0/0, ticks=0/1214154, in_queue=1214154, util=98.38% 00:16:24.263 nvme5n1: ios=0/10687, merge=0/0, ticks=0/1215903, in_queue=1215903, util=98.54% 00:16:24.263 nvme6n1: ios=0/14674, merge=0/0, ticks=0/1216718, in_queue=1216718, util=98.53% 00:16:24.263 nvme7n1: ios=0/8415, merge=0/0, ticks=0/1214724, in_queue=1214724, util=98.85% 00:16:24.263 nvme8n1: ios=0/10625, merge=0/0, ticks=0/1214965, in_queue=1214965, util=98.84% 00:16:24.263 nvme9n1: ios=0/8332, merge=0/0, ticks=0/1212958, in_queue=1212958, util=98.85% 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.263 21:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:24.263 
NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:24.263 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:24.263 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:24.263 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:24.263 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:16:24.263 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:24.264 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:24.264 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:24.264 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:24.264 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:24.264 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.264 
21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.264 rmmod nvme_tcp 00:16:24.264 rmmod nvme_fabrics 00:16:24.264 rmmod nvme_keyring 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 86951 ']' 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 86951 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 86951 ']' 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 86951 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86951 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:24.264 killing process with pid 86951 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86951' 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 86951 00:16:24.264 21:56:29 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@970 -- # wait 86951 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:24.831 00:16:24.831 real 0m48.192s 00:16:24.831 user 2m37.929s 00:16:24.831 sys 0m34.276s 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:24.831 ************************************ 00:16:24.831 21:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:24.831 END TEST nvmf_multiconnection 00:16:24.831 ************************************ 00:16:24.831 21:56:30 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:24.831 21:56:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:24.831 21:56:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:24.831 21:56:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.831 ************************************ 00:16:24.831 START TEST nvmf_initiator_timeout 00:16:24.831 ************************************ 00:16:24.831 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:24.831 * Looking for test storage... 
00:16:24.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.832 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.090 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.090 21:56:30 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:25.091 Cannot find device "nvmf_tgt_br" 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.091 Cannot find device "nvmf_tgt_br2" 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:25.091 Cannot find device "nvmf_tgt_br" 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:25.091 Cannot find device "nvmf_tgt_br2" 00:16:25.091 21:56:30 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.091 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
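For readability, the nvmf_veth_init sequence traced above condenses to the commands below. This is a sketch only: it keeps the setup steps recorded in the trace and drops the framework's teardown of leftover interfaces and its error handling.

  # Target runs in its own network namespace, reached over a Linux bridge.
  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: one initiator-side, two target-side.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target ends into the namespace and assign 10.0.0.0/24 addresses.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and enslave the host-side peers to one bridge.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br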
00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:25.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:16:25.350 00:16:25.350 --- 10.0.0.2 ping statistics --- 00:16:25.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.350 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:25.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:25.350 00:16:25.350 --- 10.0.0.3 ping statistics --- 00:16:25.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.350 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:25.350 00:16:25.350 --- 10.0.0.1 ping statistics --- 00:16:25.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.350 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:25.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
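The firewall rules, connectivity checks, and module load recorded just above reduce to a handful of commands; they are repeated here in plain form, with addresses and interface names taken directly from the trace.

  # Allow NVMe/TCP traffic in from the initiator-side veth and across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity-check the path in both directions before starting the target.
  ping -c 1 10.0.0.2                                  # host -> first target address
  ping -c 1 10.0.0.3                                  # host -> second target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

  # The kernel initiator needs the nvme-tcp transport module loaded.
  modprobe nvme-tcp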
00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=87994 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 87994 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 87994 ']' 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.350 21:56:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:25.350 [2024-07-24 21:56:30.961211] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:25.350 [2024-07-24 21:56:30.961306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.609 [2024-07-24 21:56:31.103314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.609 [2024-07-24 21:56:31.188485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.609 [2024-07-24 21:56:31.188757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.609 [2024-07-24 21:56:31.188973] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.609 [2024-07-24 21:56:31.189143] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.609 [2024-07-24 21:56:31.189299] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
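The nvmfappstart step above launches nvmf_tgt inside the namespace and blocks until its JSON-RPC socket answers. A rough bash equivalent is sketched below, assuming SPDK's scripts/rpc.py client and the default /var/tmp/spdk.sock socket; the framework's actual waitforlisten helper differs in its retry and timeout details.

  # Start the target in its namespace: shm id 0, tracepoint mask 0xFFFF, core mask 0xF.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the JSON-RPC socket until the application is ready to accept commands.
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done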
00:16:25.609 [2024-07-24 21:56:31.189604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.609 [2024-07-24 21:56:31.189758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.609 [2024-07-24 21:56:31.189821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.609 [2024-07-24 21:56:31.189824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.609 [2024-07-24 21:56:31.247678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:26.543 21:56:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.543 21:56:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:16:26.543 21:56:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.543 21:56:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.543 21:56:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:26.543 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.543 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:26.543 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:26.543 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.543 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:26.543 Malloc0 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:26.544 Delay0 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:26.544 [2024-07-24 21:56:32.054900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:26.544 [2024-07-24 21:56:32.083213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:26.544 21:56:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=88064 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:29.096 21:56:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:29.096 [global] 00:16:29.096 thread=1 00:16:29.096 invalidate=1 00:16:29.096 rw=write 00:16:29.096 time_based=1 00:16:29.096 runtime=60 00:16:29.096 ioengine=libaio 00:16:29.096 direct=1 00:16:29.096 bs=4096 00:16:29.096 iodepth=1 00:16:29.096 norandommap=0 00:16:29.096 numjobs=1 00:16:29.096 00:16:29.096 verify_dump=1 00:16:29.096 verify_backlog=512 00:16:29.096 verify_state_save=0 00:16:29.096 do_verify=1 00:16:29.096 verify=crc32c-intel 00:16:29.096 [job0] 00:16:29.096 filename=/dev/nvme0n1 00:16:29.096 Could not set queue depth (nvme0n1) 00:16:29.096 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.096 fio-3.35 00:16:29.096 Starting 1 thread 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.624 true 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.624 true 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.624 true 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.624 true 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.624 21:56:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:34.912 true 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:34.912 true 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:34.912 true 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:34.912 true 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:34.912 21:56:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 88064 00:17:31.169 00:17:31.169 job0: (groupid=0, jobs=1): err= 0: pid=88085: Wed Jul 24 21:57:34 2024 00:17:31.169 read: IOPS=818, BW=3275KiB/s (3353kB/s)(192MiB/60000msec) 00:17:31.169 slat (usec): min=11, max=136, avg=14.57, stdev= 3.50 00:17:31.169 clat (usec): min=162, max=556, avg=201.85, stdev=17.56 00:17:31.169 lat (usec): min=175, max=584, avg=216.43, stdev=18.23 00:17:31.169 clat percentiles (usec): 00:17:31.169 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:17:31.169 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:17:31.169 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 233], 00:17:31.169 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 314], 99.95th=[ 359], 00:17:31.169 | 99.99th=[ 461] 00:17:31.169 write: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec); 0 zone resets 00:17:31.169 slat (usec): min=13, max=12086, avg=21.42, stdev=64.94 00:17:31.169 clat (usec): min=3, max=40586k, avg=979.62, stdev=183063.69 00:17:31.169 lat (usec): min=137, max=40586k, avg=1001.04, stdev=183063.71 00:17:31.169 clat percentiles (usec): 00:17:31.169 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:17:31.169 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:17:31.169 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 184], 00:17:31.169 | 99.00th=[ 202], 99.50th=[ 215], 99.90th=[ 277], 99.95th=[ 355], 00:17:31.169 | 99.99th=[ 725] 00:17:31.169 bw ( KiB/s): min= 1976, max=11704, per=100.00%, avg=9871.82, stdev=1682.35, samples=39 00:17:31.169 iops : min= 494, max= 2926, avg=2467.95, stdev=420.58, samples=39 00:17:31.169 lat (usec) : 4=0.01%, 250=99.36%, 500=0.62%, 750=0.01%, 1000=0.01% 00:17:31.169 lat (msec) : >=2000=0.01% 00:17:31.169 cpu : usr=0.59%, sys=2.28%, ctx=98278, majf=0, minf=2 00:17:31.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:31.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.169 issued rwts: total=49119,49152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:31.169 00:17:31.169 Run status group 0 (all jobs): 00:17:31.169 READ: bw=3275KiB/s (3353kB/s), 3275KiB/s-3275KiB/s (3353kB/s-3353kB/s), io=192MiB (201MB), run=60000-60000msec 00:17:31.169 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:17:31.169 00:17:31.169 Disk stats (read/write): 00:17:31.169 nvme0n1: ios=48871/49152, merge=0/0, ticks=10292/8147, in_queue=18439, util=99.60% 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.169 nvmf hotplug test: fio successful as expected 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.169 rmmod nvme_tcp 00:17:31.169 rmmod nvme_fabrics 00:17:31.169 rmmod nvme_keyring 00:17:31.169 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 87994 ']' 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 87994 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 87994 ']' 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 87994 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:31.170 
21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87994 00:17:31.170 killing process with pid 87994 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87994' 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 87994 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 87994 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:31.170 00:17:31.170 real 1m4.495s 00:17:31.170 user 3m53.563s 00:17:31.170 sys 0m21.503s 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:31.170 21:57:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:31.170 ************************************ 00:17:31.170 END TEST nvmf_initiator_timeout 00:17:31.170 ************************************ 00:17:31.170 21:57:34 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:17:31.170 21:57:34 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:17:31.170 21:57:34 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.170 21:57:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.170 21:57:35 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:17:31.170 21:57:35 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:31.170 21:57:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.170 21:57:35 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:17:31.170 21:57:35 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:31.170 21:57:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:31.170 21:57:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:31.170 21:57:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.170 ************************************ 00:17:31.170 START TEST nvmf_identify 00:17:31.170 ************************************ 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:31.170 * Looking for test storage... 
00:17:31.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:31.170 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:31.171 Cannot find device "nvmf_tgt_br" 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.171 Cannot find device "nvmf_tgt_br2" 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:31.171 Cannot find device "nvmf_tgt_br" 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:31.171 Cannot find device "nvmf_tgt_br2" 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.171 21:57:35 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:31.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:31.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:17:31.171 00:17:31.171 --- 10.0.0.2 ping statistics --- 00:17:31.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.171 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:31.171 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.171 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:17:31.171 00:17:31.171 --- 10.0.0.3 ping statistics --- 00:17:31.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.171 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:17:31.171 00:17:31.171 --- 10.0.0.1 ping statistics --- 00:17:31.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.171 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88917 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88917 00:17:31.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 88917 ']' 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
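For orientation, the namespace/bridge topology that nvmf_veth_init rebuilt above (and verified with the three pings) reduces to roughly the following. This is a condensed sketch using the interface and address names from this log; the individual link-up steps and error handling are trimmed.

# Condensed sketch of the veth/bridge layout built by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The initiator side reaches the target addresses 10.0.0.2 and 10.0.0.3 through nvmf_br, and the target namespace reaches the host at 10.0.0.1, which is exactly what the three pings above confirm before the identify target is started.
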
00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.171 21:57:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 [2024-07-24 21:57:35.538879] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:31.171 [2024-07-24 21:57:35.539874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.171 [2024-07-24 21:57:35.684653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.171 [2024-07-24 21:57:35.773335] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.171 [2024-07-24 21:57:35.773710] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.171 [2024-07-24 21:57:35.773732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.171 [2024-07-24 21:57:35.773742] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.171 [2024-07-24 21:57:35.773749] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.171 [2024-07-24 21:57:35.773842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.171 [2024-07-24 21:57:35.773950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.171 [2024-07-24 21:57:35.774150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.171 [2024-07-24 21:57:35.774159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.171 [2024-07-24 21:57:35.830872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:31.171 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.171 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:17:31.171 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 [2024-07-24 21:57:36.458148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 Malloc0 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 [2024-07-24 21:57:36.565741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 [ 00:17:31.172 { 00:17:31.172 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:31.172 "subtype": "Discovery", 00:17:31.172 "listen_addresses": [ 00:17:31.172 { 00:17:31.172 "trtype": "TCP", 00:17:31.172 "adrfam": "IPv4", 00:17:31.172 "traddr": "10.0.0.2", 00:17:31.172 "trsvcid": "4420" 00:17:31.172 } 00:17:31.172 ], 00:17:31.172 "allow_any_host": true, 00:17:31.172 "hosts": [] 00:17:31.172 }, 00:17:31.172 { 00:17:31.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.172 "subtype": "NVMe", 00:17:31.172 "listen_addresses": [ 00:17:31.172 { 00:17:31.172 "trtype": "TCP", 00:17:31.172 "adrfam": "IPv4", 00:17:31.172 "traddr": "10.0.0.2", 00:17:31.172 "trsvcid": "4420" 00:17:31.172 } 00:17:31.172 ], 00:17:31.172 "allow_any_host": true, 00:17:31.172 "hosts": [], 00:17:31.172 "serial_number": "SPDK00000000000001", 00:17:31.172 "model_number": "SPDK bdev Controller", 00:17:31.172 "max_namespaces": 32, 00:17:31.172 "min_cntlid": 1, 00:17:31.172 "max_cntlid": 65519, 00:17:31.172 "namespaces": [ 00:17:31.172 { 00:17:31.172 "nsid": 1, 00:17:31.172 "bdev_name": "Malloc0", 00:17:31.172 "name": "Malloc0", 00:17:31.172 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:31.172 "eui64": "ABCDEF0123456789", 00:17:31.172 "uuid": "affc542a-eba8-4edc-95af-0b05f670409d" 00:17:31.172 } 00:17:31.172 ] 00:17:31.172 } 00:17:31.172 ] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify 
-r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:31.172 [2024-07-24 21:57:36.625234] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:31.172 [2024-07-24 21:57:36.625281] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88952 ] 00:17:31.172 [2024-07-24 21:57:36.762692] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:31.172 [2024-07-24 21:57:36.762752] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:31.172 [2024-07-24 21:57:36.762760] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:31.172 [2024-07-24 21:57:36.762773] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:31.172 [2024-07-24 21:57:36.762783] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:31.172 [2024-07-24 21:57:36.762915] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:31.172 [2024-07-24 21:57:36.762961] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4de970 0 00:17:31.172 [2024-07-24 21:57:36.767629] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:31.172 [2024-07-24 21:57:36.767654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:31.172 [2024-07-24 21:57:36.767660] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:31.172 [2024-07-24 21:57:36.767664] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:31.172 [2024-07-24 21:57:36.767711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.767719] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.767723] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.172 [2024-07-24 21:57:36.767738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:31.172 [2024-07-24 21:57:36.767769] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.172 [2024-07-24 21:57:36.775631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.172 [2024-07-24 21:57:36.775652] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.172 [2024-07-24 21:57:36.775657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.775663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.172 [2024-07-24 21:57:36.775677] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:31.172 [2024-07-24 21:57:36.775685] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:31.172 [2024-07-24 21:57:36.775692] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:31.172 
[2024-07-24 21:57:36.775711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.775716] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.775720] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.172 [2024-07-24 21:57:36.775730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.172 [2024-07-24 21:57:36.775759] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.172 [2024-07-24 21:57:36.775825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.172 [2024-07-24 21:57:36.775832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.172 [2024-07-24 21:57:36.775836] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.775840] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.172 [2024-07-24 21:57:36.775847] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:31.172 [2024-07-24 21:57:36.775855] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:31.172 [2024-07-24 21:57:36.775863] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.775867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.775871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.172 [2024-07-24 21:57:36.775879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.172 [2024-07-24 21:57:36.775899] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.172 [2024-07-24 21:57:36.775955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.172 [2024-07-24 21:57:36.775962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.172 [2024-07-24 21:57:36.775966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.172 [2024-07-24 21:57:36.775970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.172 [2024-07-24 21:57:36.775976] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:31.172 [2024-07-24 21:57:36.775985] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:31.173 [2024-07-24 21:57:36.775993] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.775997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.173 [2024-07-24 21:57:36.776009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.173 [2024-07-24 21:57:36.776029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x5171d0, cid 0, qid 0 00:17:31.173 [2024-07-24 21:57:36.776078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.173 [2024-07-24 21:57:36.776084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.173 [2024-07-24 21:57:36.776088] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776093] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.173 [2024-07-24 21:57:36.776099] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:31.173 [2024-07-24 21:57:36.776132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776148] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.173 [2024-07-24 21:57:36.776156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.173 [2024-07-24 21:57:36.776182] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.173 [2024-07-24 21:57:36.776238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.173 [2024-07-24 21:57:36.776245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.173 [2024-07-24 21:57:36.776249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.173 [2024-07-24 21:57:36.776258] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:31.173 [2024-07-24 21:57:36.776264] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:31.173 [2024-07-24 21:57:36.776273] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:31.173 [2024-07-24 21:57:36.776379] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:31.173 [2024-07-24 21:57:36.776390] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:31.173 [2024-07-24 21:57:36.776401] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776409] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.173 [2024-07-24 21:57:36.776417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.173 [2024-07-24 21:57:36.776438] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.173 [2024-07-24 21:57:36.776491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.173 [2024-07-24 21:57:36.776498] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.173 [2024-07-24 21:57:36.776502] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776506] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.173 [2024-07-24 21:57:36.776512] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:31.173 [2024-07-24 21:57:36.776522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776531] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.173 [2024-07-24 21:57:36.776538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.173 [2024-07-24 21:57:36.776557] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.173 [2024-07-24 21:57:36.776602] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.173 [2024-07-24 21:57:36.776622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.173 [2024-07-24 21:57:36.776629] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776633] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.173 [2024-07-24 21:57:36.776638] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:31.173 [2024-07-24 21:57:36.776644] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:31.173 [2024-07-24 21:57:36.776653] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:31.173 [2024-07-24 21:57:36.776665] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:31.173 [2024-07-24 21:57:36.776676] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776680] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.173 [2024-07-24 21:57:36.776688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.173 [2024-07-24 21:57:36.776710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.173 [2024-07-24 21:57:36.776799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.173 [2024-07-24 21:57:36.776806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.173 [2024-07-24 21:57:36.776810] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776814] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4de970): datao=0, datal=4096, cccid=0 00:17:31.173 [2024-07-24 21:57:36.776819] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x5171d0) on tqpair(0x4de970): expected_datao=0, payload_size=4096 00:17:31.173 [2024-07-24 21:57:36.776825] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776833] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776837] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776846] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.173 [2024-07-24 21:57:36.776852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.173 [2024-07-24 21:57:36.776856] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.173 [2024-07-24 21:57:36.776869] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:31.173 [2024-07-24 21:57:36.776874] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:31.173 [2024-07-24 21:57:36.776879] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:31.173 [2024-07-24 21:57:36.776884] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:31.173 [2024-07-24 21:57:36.776889] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:31.173 [2024-07-24 21:57:36.776895] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:31.173 [2024-07-24 21:57:36.776908] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:31.173 [2024-07-24 21:57:36.776917] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776921] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.173 [2024-07-24 21:57:36.776925] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.173 [2024-07-24 21:57:36.776933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.173 [2024-07-24 21:57:36.776953] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.173 [2024-07-24 21:57:36.777015] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.174 [2024-07-24 21:57:36.777024] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.174 [2024-07-24 21:57:36.777028] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777032] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5171d0) on tqpair=0x4de970 00:17:31.174 [2024-07-24 21:57:36.777040] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777044] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777048] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777055] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.174 [2024-07-24 21:57:36.777062] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.174 [2024-07-24 21:57:36.777082] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777086] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777090] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.174 [2024-07-24 21:57:36.777102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777106] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777110] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.174 [2024-07-24 21:57:36.777121] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:31.174 [2024-07-24 21:57:36.777130] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:31.174 [2024-07-24 21:57:36.777138] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777142] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.174 [2024-07-24 21:57:36.777183] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5171d0, cid 0, qid 0 00:17:31.174 [2024-07-24 21:57:36.777193] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517330, cid 1, qid 0 00:17:31.174 [2024-07-24 21:57:36.777198] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517490, cid 2, qid 0 00:17:31.174 [2024-07-24 21:57:36.777203] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.174 [2024-07-24 21:57:36.777207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517750, cid 4, qid 0 00:17:31.174 [2024-07-24 21:57:36.777292] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.174 [2024-07-24 21:57:36.777299] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.174 [2024-07-24 21:57:36.777303] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777307] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x517750) on tqpair=0x4de970 00:17:31.174 [2024-07-24 21:57:36.777313] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:31.174 [2024-07-24 21:57:36.777319] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:31.174 [2024-07-24 21:57:36.777330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.174 [2024-07-24 21:57:36.777362] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517750, cid 4, qid 0 00:17:31.174 [2024-07-24 21:57:36.777421] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.174 [2024-07-24 21:57:36.777429] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.174 [2024-07-24 21:57:36.777432] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777436] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4de970): datao=0, datal=4096, cccid=4 00:17:31.174 [2024-07-24 21:57:36.777441] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517750) on tqpair(0x4de970): expected_datao=0, payload_size=4096 00:17:31.174 [2024-07-24 21:57:36.777446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777454] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777458] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777466] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.174 [2024-07-24 21:57:36.777472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.174 [2024-07-24 21:57:36.777476] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777480] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x517750) on tqpair=0x4de970 00:17:31.174 [2024-07-24 21:57:36.777493] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:31.174 [2024-07-24 21:57:36.777520] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777526] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.174 [2024-07-24 21:57:36.777542] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777546] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.174 [2024-07-24 21:57:36.777550] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4de970) 00:17:31.174 [2024-07-24 21:57:36.777556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.175 [2024-07-24 21:57:36.777582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517750, cid 4, qid 0 00:17:31.175 [2024-07-24 21:57:36.777590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5178b0, cid 5, qid 0 00:17:31.175 [2024-07-24 21:57:36.777711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.175 [2024-07-24 21:57:36.777720] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.175 [2024-07-24 21:57:36.777724] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777728] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4de970): datao=0, datal=1024, cccid=4 00:17:31.175 [2024-07-24 21:57:36.777733] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517750) on tqpair(0x4de970): expected_datao=0, payload_size=1024 00:17:31.175 [2024-07-24 21:57:36.777737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777744] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777748] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777754] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.175 [2024-07-24 21:57:36.777760] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.175 [2024-07-24 21:57:36.777763] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777767] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5178b0) on tqpair=0x4de970 00:17:31.175 [2024-07-24 21:57:36.777786] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.175 [2024-07-24 21:57:36.777795] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.175 [2024-07-24 21:57:36.777798] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x517750) on tqpair=0x4de970 00:17:31.175 [2024-07-24 21:57:36.777815] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777820] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4de970) 00:17:31.175 [2024-07-24 21:57:36.777827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.175 [2024-07-24 21:57:36.777853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517750, cid 4, qid 0 00:17:31.175 [2024-07-24 21:57:36.777921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.175 [2024-07-24 21:57:36.777929] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.175 [2024-07-24 21:57:36.777932] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777936] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4de970): datao=0, datal=3072, cccid=4 00:17:31.175 [2024-07-24 21:57:36.777941] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517750) on tqpair(0x4de970): expected_datao=0, payload_size=3072 00:17:31.175 [2024-07-24 21:57:36.777946] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.175 [2024-07-24 
21:57:36.777953] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777957] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777965] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.175 [2024-07-24 21:57:36.777971] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.175 [2024-07-24 21:57:36.777975] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777979] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x517750) on tqpair=0x4de970 00:17:31.175 [2024-07-24 21:57:36.777989] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.777994] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4de970) 00:17:31.175 [2024-07-24 21:57:36.778001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.175 [2024-07-24 21:57:36.778025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x517750, cid 4, qid 0 00:17:31.175 [2024-07-24 21:57:36.778091] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.175 [2024-07-24 21:57:36.778099] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.175 [2024-07-24 21:57:36.778102] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.778106] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4de970): datao=0, datal=8, cccid=4 00:17:31.175 [2024-07-24 21:57:36.778111] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x517750) on tqpair(0x4de970): expected_datao=0, payload_size=8 00:17:31.175 [2024-07-24 21:57:36.778116] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.778123] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.778126] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.778141] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.175 [2024-07-24 21:57:36.778149] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.175 [2024-07-24 21:57:36.778153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.175 [2024-07-24 21:57:36.778157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x517750) on tqpair=0x4de970 00:17:31.175 ===================================================== 00:17:31.175 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:31.175 ===================================================== 00:17:31.175 Controller Capabilities/Features 00:17:31.175 ================================ 00:17:31.175 Vendor ID: 0000 00:17:31.175 Subsystem Vendor ID: 0000 00:17:31.175 Serial Number: .................... 00:17:31.175 Model Number: ........................................ 
00:17:31.175 Firmware Version: 24.05.1 00:17:31.175 Recommended Arb Burst: 0 00:17:31.175 IEEE OUI Identifier: 00 00 00 00:17:31.175 Multi-path I/O 00:17:31.175 May have multiple subsystem ports: No 00:17:31.175 May have multiple controllers: No 00:17:31.175 Associated with SR-IOV VF: No 00:17:31.175 Max Data Transfer Size: 131072 00:17:31.175 Max Number of Namespaces: 0 00:17:31.175 Max Number of I/O Queues: 1024 00:17:31.175 NVMe Specification Version (VS): 1.3 00:17:31.175 NVMe Specification Version (Identify): 1.3 00:17:31.175 Maximum Queue Entries: 128 00:17:31.175 Contiguous Queues Required: Yes 00:17:31.175 Arbitration Mechanisms Supported 00:17:31.175 Weighted Round Robin: Not Supported 00:17:31.175 Vendor Specific: Not Supported 00:17:31.175 Reset Timeout: 15000 ms 00:17:31.175 Doorbell Stride: 4 bytes 00:17:31.176 NVM Subsystem Reset: Not Supported 00:17:31.176 Command Sets Supported 00:17:31.176 NVM Command Set: Supported 00:17:31.176 Boot Partition: Not Supported 00:17:31.176 Memory Page Size Minimum: 4096 bytes 00:17:31.176 Memory Page Size Maximum: 4096 bytes 00:17:31.176 Persistent Memory Region: Not Supported 00:17:31.176 Optional Asynchronous Events Supported 00:17:31.176 Namespace Attribute Notices: Not Supported 00:17:31.176 Firmware Activation Notices: Not Supported 00:17:31.176 ANA Change Notices: Not Supported 00:17:31.176 PLE Aggregate Log Change Notices: Not Supported 00:17:31.176 LBA Status Info Alert Notices: Not Supported 00:17:31.176 EGE Aggregate Log Change Notices: Not Supported 00:17:31.176 Normal NVM Subsystem Shutdown event: Not Supported 00:17:31.176 Zone Descriptor Change Notices: Not Supported 00:17:31.176 Discovery Log Change Notices: Supported 00:17:31.176 Controller Attributes 00:17:31.176 128-bit Host Identifier: Not Supported 00:17:31.176 Non-Operational Permissive Mode: Not Supported 00:17:31.176 NVM Sets: Not Supported 00:17:31.176 Read Recovery Levels: Not Supported 00:17:31.176 Endurance Groups: Not Supported 00:17:31.176 Predictable Latency Mode: Not Supported 00:17:31.176 Traffic Based Keep ALive: Not Supported 00:17:31.176 Namespace Granularity: Not Supported 00:17:31.176 SQ Associations: Not Supported 00:17:31.176 UUID List: Not Supported 00:17:31.176 Multi-Domain Subsystem: Not Supported 00:17:31.176 Fixed Capacity Management: Not Supported 00:17:31.176 Variable Capacity Management: Not Supported 00:17:31.176 Delete Endurance Group: Not Supported 00:17:31.176 Delete NVM Set: Not Supported 00:17:31.176 Extended LBA Formats Supported: Not Supported 00:17:31.176 Flexible Data Placement Supported: Not Supported 00:17:31.176 00:17:31.176 Controller Memory Buffer Support 00:17:31.176 ================================ 00:17:31.176 Supported: No 00:17:31.176 00:17:31.176 Persistent Memory Region Support 00:17:31.176 ================================ 00:17:31.176 Supported: No 00:17:31.176 00:17:31.176 Admin Command Set Attributes 00:17:31.176 ============================ 00:17:31.176 Security Send/Receive: Not Supported 00:17:31.176 Format NVM: Not Supported 00:17:31.176 Firmware Activate/Download: Not Supported 00:17:31.176 Namespace Management: Not Supported 00:17:31.176 Device Self-Test: Not Supported 00:17:31.176 Directives: Not Supported 00:17:31.176 NVMe-MI: Not Supported 00:17:31.176 Virtualization Management: Not Supported 00:17:31.176 Doorbell Buffer Config: Not Supported 00:17:31.176 Get LBA Status Capability: Not Supported 00:17:31.176 Command & Feature Lockdown Capability: Not Supported 00:17:31.176 Abort Command Limit: 1 00:17:31.176 
Async Event Request Limit: 4 00:17:31.176 Number of Firmware Slots: N/A 00:17:31.176 Firmware Slot 1 Read-Only: N/A 00:17:31.176 Firmware Activation Without Reset: N/A 00:17:31.176 Multiple Update Detection Support: N/A 00:17:31.176 Firmware Update Granularity: No Information Provided 00:17:31.176 Per-Namespace SMART Log: No 00:17:31.176 Asymmetric Namespace Access Log Page: Not Supported 00:17:31.176 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:31.176 Command Effects Log Page: Not Supported 00:17:31.176 Get Log Page Extended Data: Supported 00:17:31.176 Telemetry Log Pages: Not Supported 00:17:31.176 Persistent Event Log Pages: Not Supported 00:17:31.176 Supported Log Pages Log Page: May Support 00:17:31.176 Commands Supported & Effects Log Page: Not Supported 00:17:31.176 Feature Identifiers & Effects Log Page:May Support 00:17:31.176 NVMe-MI Commands & Effects Log Page: May Support 00:17:31.176 Data Area 4 for Telemetry Log: Not Supported 00:17:31.176 Error Log Page Entries Supported: 128 00:17:31.176 Keep Alive: Not Supported 00:17:31.176 00:17:31.176 NVM Command Set Attributes 00:17:31.176 ========================== 00:17:31.176 Submission Queue Entry Size 00:17:31.176 Max: 1 00:17:31.176 Min: 1 00:17:31.176 Completion Queue Entry Size 00:17:31.176 Max: 1 00:17:31.176 Min: 1 00:17:31.176 Number of Namespaces: 0 00:17:31.176 Compare Command: Not Supported 00:17:31.176 Write Uncorrectable Command: Not Supported 00:17:31.176 Dataset Management Command: Not Supported 00:17:31.176 Write Zeroes Command: Not Supported 00:17:31.176 Set Features Save Field: Not Supported 00:17:31.176 Reservations: Not Supported 00:17:31.176 Timestamp: Not Supported 00:17:31.176 Copy: Not Supported 00:17:31.176 Volatile Write Cache: Not Present 00:17:31.176 Atomic Write Unit (Normal): 1 00:17:31.176 Atomic Write Unit (PFail): 1 00:17:31.176 Atomic Compare & Write Unit: 1 00:17:31.176 Fused Compare & Write: Supported 00:17:31.176 Scatter-Gather List 00:17:31.176 SGL Command Set: Supported 00:17:31.176 SGL Keyed: Supported 00:17:31.176 SGL Bit Bucket Descriptor: Not Supported 00:17:31.176 SGL Metadata Pointer: Not Supported 00:17:31.176 Oversized SGL: Not Supported 00:17:31.176 SGL Metadata Address: Not Supported 00:17:31.176 SGL Offset: Supported 00:17:31.176 Transport SGL Data Block: Not Supported 00:17:31.176 Replay Protected Memory Block: Not Supported 00:17:31.176 00:17:31.176 Firmware Slot Information 00:17:31.176 ========================= 00:17:31.176 Active slot: 0 00:17:31.176 00:17:31.176 00:17:31.176 Error Log 00:17:31.176 ========= 00:17:31.176 00:17:31.176 Active Namespaces 00:17:31.176 ================= 00:17:31.176 Discovery Log Page 00:17:31.176 ================== 00:17:31.176 Generation Counter: 2 00:17:31.176 Number of Records: 2 00:17:31.177 Record Format: 0 00:17:31.177 00:17:31.177 Discovery Log Entry 0 00:17:31.177 ---------------------- 00:17:31.177 Transport Type: 3 (TCP) 00:17:31.177 Address Family: 1 (IPv4) 00:17:31.177 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:31.177 Entry Flags: 00:17:31.177 Duplicate Returned Information: 1 00:17:31.177 Explicit Persistent Connection Support for Discovery: 1 00:17:31.177 Transport Requirements: 00:17:31.177 Secure Channel: Not Required 00:17:31.177 Port ID: 0 (0x0000) 00:17:31.177 Controller ID: 65535 (0xffff) 00:17:31.177 Admin Max SQ Size: 128 00:17:31.177 Transport Service Identifier: 4420 00:17:31.177 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:31.177 Transport Address: 10.0.0.2 00:17:31.177 
Discovery Log Entry 1 00:17:31.177 ---------------------- 00:17:31.177 Transport Type: 3 (TCP) 00:17:31.177 Address Family: 1 (IPv4) 00:17:31.177 Subsystem Type: 2 (NVM Subsystem) 00:17:31.177 Entry Flags: 00:17:31.177 Duplicate Returned Information: 0 00:17:31.177 Explicit Persistent Connection Support for Discovery: 0 00:17:31.177 Transport Requirements: 00:17:31.177 Secure Channel: Not Required 00:17:31.177 Port ID: 0 (0x0000) 00:17:31.177 Controller ID: 65535 (0xffff) 00:17:31.177 Admin Max SQ Size: 128 00:17:31.177 Transport Service Identifier: 4420 00:17:31.177 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:31.177 Transport Address: 10.0.0.2 [2024-07-24 21:57:36.778254] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:31.177 [2024-07-24 21:57:36.778270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.177 [2024-07-24 21:57:36.778278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.177 [2024-07-24 21:57:36.778284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.177 [2024-07-24 21:57:36.778290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.177 [2024-07-24 21:57:36.778300] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778305] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778308] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.177 [2024-07-24 21:57:36.778316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.177 [2024-07-24 21:57:36.778339] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.177 [2024-07-24 21:57:36.778387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.177 [2024-07-24 21:57:36.778395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.177 [2024-07-24 21:57:36.778399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.177 [2024-07-24 21:57:36.778411] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778415] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778419] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.177 [2024-07-24 21:57:36.778427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.177 [2024-07-24 21:57:36.778450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.177 [2024-07-24 21:57:36.778515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.177 [2024-07-24 21:57:36.778522] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.177 [2024-07-24 21:57:36.778526] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.177 [2024-07-24 21:57:36.778540] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:31.177 [2024-07-24 21:57:36.778545] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:31.177 [2024-07-24 21:57:36.778556] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778561] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778565] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.177 [2024-07-24 21:57:36.778572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.177 [2024-07-24 21:57:36.778591] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.177 [2024-07-24 21:57:36.778651] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.177 [2024-07-24 21:57:36.778660] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.177 [2024-07-24 21:57:36.778664] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778668] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.177 [2024-07-24 21:57:36.778680] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778688] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.177 [2024-07-24 21:57:36.778696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.177 [2024-07-24 21:57:36.778717] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.177 [2024-07-24 21:57:36.778761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.177 [2024-07-24 21:57:36.778768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.177 [2024-07-24 21:57:36.778771] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778775] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.177 [2024-07-24 21:57:36.778787] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778791] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778795] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.177 [2024-07-24 21:57:36.778803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.177 [2024-07-24 21:57:36.778821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.177 [2024-07-24 21:57:36.778870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.177 [2024-07-24 
21:57:36.778877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.177 [2024-07-24 21:57:36.778880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.177 [2024-07-24 21:57:36.778895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.177 [2024-07-24 21:57:36.778911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.177 [2024-07-24 21:57:36.778929] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.177 [2024-07-24 21:57:36.778981] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.177 [2024-07-24 21:57:36.778988] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.177 [2024-07-24 21:57:36.778992] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.778996] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.177 [2024-07-24 21:57:36.779007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.177 [2024-07-24 21:57:36.779011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779015] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.178 [2024-07-24 21:57:36.779022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.178 [2024-07-24 21:57:36.779040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.178 [2024-07-24 21:57:36.779084] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.178 [2024-07-24 21:57:36.779091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.178 [2024-07-24 21:57:36.779094] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779099] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.178 [2024-07-24 21:57:36.779109] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779114] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779118] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.178 [2024-07-24 21:57:36.779125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.178 [2024-07-24 21:57:36.779143] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.178 [2024-07-24 21:57:36.779189] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.178 [2024-07-24 21:57:36.779201] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.178 [2024-07-24 21:57:36.779206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.178 
[2024-07-24 21:57:36.779211] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.178 [2024-07-24 21:57:36.779227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779237] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.178 [2024-07-24 21:57:36.779255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.178 [2024-07-24 21:57:36.779286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.178 [2024-07-24 21:57:36.779330] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.178 [2024-07-24 21:57:36.779337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.178 [2024-07-24 21:57:36.779342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.178 [2024-07-24 21:57:36.779357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779362] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779366] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.178 [2024-07-24 21:57:36.779374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.178 [2024-07-24 21:57:36.779393] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.178 [2024-07-24 21:57:36.779443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.178 [2024-07-24 21:57:36.779450] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.178 [2024-07-24 21:57:36.779453] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.178 [2024-07-24 21:57:36.779468] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779473] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.178 [2024-07-24 21:57:36.779484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.178 [2024-07-24 21:57:36.779502] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.178 [2024-07-24 21:57:36.779548] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.178 [2024-07-24 21:57:36.779555] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.178 [2024-07-24 21:57:36.779559] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779563] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.178 [2024-07-24 21:57:36.779574] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779579] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.779583] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.178 [2024-07-24 21:57:36.779590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.178 [2024-07-24 21:57:36.779608] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.178 [2024-07-24 21:57:36.783644] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.178 [2024-07-24 21:57:36.783653] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.178 [2024-07-24 21:57:36.783656] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.783661] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.178 [2024-07-24 21:57:36.783676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.783681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.783685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4de970) 00:17:31.178 [2024-07-24 21:57:36.783694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.178 [2024-07-24 21:57:36.783720] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5175f0, cid 3, qid 0 00:17:31.178 [2024-07-24 21:57:36.783785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.178 [2024-07-24 21:57:36.783793] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.178 [2024-07-24 21:57:36.783796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.178 [2024-07-24 21:57:36.783801] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5175f0) on tqpair=0x4de970 00:17:31.178 [2024-07-24 21:57:36.783809] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:31.178 00:17:31.178 21:57:36 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:31.178 [2024-07-24 21:57:36.815808] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:31.178 [2024-07-24 21:57:36.815857] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88958 ] 00:17:31.440 [2024-07-24 21:57:36.951955] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:31.440 [2024-07-24 21:57:36.952017] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:31.440 [2024-07-24 21:57:36.952024] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:31.440 [2024-07-24 21:57:36.952038] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:31.440 [2024-07-24 21:57:36.952048] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:31.440 [2024-07-24 21:57:36.952207] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:31.440 [2024-07-24 21:57:36.952264] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa2b970 0 00:17:31.440 [2024-07-24 21:57:36.964679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:31.440 [2024-07-24 21:57:36.964703] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:31.440 [2024-07-24 21:57:36.964710] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:31.440 [2024-07-24 21:57:36.964714] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:31.440 [2024-07-24 21:57:36.964763] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.440 [2024-07-24 21:57:36.964770] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.440 [2024-07-24 21:57:36.964775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.440 [2024-07-24 21:57:36.964789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:31.440 [2024-07-24 21:57:36.964821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.440 [2024-07-24 21:57:36.972642] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.440 [2024-07-24 21:57:36.972678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.440 [2024-07-24 21:57:36.972684] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.972689] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.441 [2024-07-24 21:57:36.972703] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:31.441 [2024-07-24 21:57:36.972711] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:31.441 [2024-07-24 21:57:36.972718] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:31.441 [2024-07-24 21:57:36.972735] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.972741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.972745] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.441 [2024-07-24 21:57:36.972754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.441 [2024-07-24 21:57:36.972780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.441 [2024-07-24 21:57:36.972869] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.441 [2024-07-24 21:57:36.972877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.441 [2024-07-24 21:57:36.972881] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.972885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.441 [2024-07-24 21:57:36.972891] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:31.441 [2024-07-24 21:57:36.972900] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:31.441 [2024-07-24 21:57:36.972908] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.972912] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.972916] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.441 [2024-07-24 21:57:36.972924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.441 [2024-07-24 21:57:36.972944] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.441 [2024-07-24 21:57:36.972991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.441 [2024-07-24 21:57:36.972998] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.441 [2024-07-24 21:57:36.973002] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973017] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.441 [2024-07-24 21:57:36.973024] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:31.441 [2024-07-24 21:57:36.973034] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:31.441 [2024-07-24 21:57:36.973042] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973047] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.441 [2024-07-24 21:57:36.973058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.441 [2024-07-24 21:57:36.973078] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.441 [2024-07-24 21:57:36.973128] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.441 [2024-07-24 21:57:36.973135] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.441 [2024-07-24 21:57:36.973139] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973143] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.441 [2024-07-24 21:57:36.973149] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:31.441 [2024-07-24 21:57:36.973160] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973165] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973178] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.441 [2024-07-24 21:57:36.973185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.441 [2024-07-24 21:57:36.973204] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.441 [2024-07-24 21:57:36.973247] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.441 [2024-07-24 21:57:36.973254] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.441 [2024-07-24 21:57:36.973258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.441 [2024-07-24 21:57:36.973267] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:31.441 [2024-07-24 21:57:36.973272] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:31.441 [2024-07-24 21:57:36.973281] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:31.441 [2024-07-24 21:57:36.973387] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:31.441 [2024-07-24 21:57:36.973392] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:31.441 [2024-07-24 21:57:36.973402] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.441 [2024-07-24 21:57:36.973418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.441 [2024-07-24 21:57:36.973437] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.441 [2024-07-24 21:57:36.973483] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.441 [2024-07-24 21:57:36.973490] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.441 [2024-07-24 21:57:36.973494] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973498] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.441 [2024-07-24 21:57:36.973503] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:31.441 [2024-07-24 21:57:36.973514] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973519] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973522] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.441 [2024-07-24 21:57:36.973530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.441 [2024-07-24 21:57:36.973548] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.441 [2024-07-24 21:57:36.973600] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.441 [2024-07-24 21:57:36.973607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.441 [2024-07-24 21:57:36.973624] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.441 [2024-07-24 21:57:36.973634] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:31.441 [2024-07-24 21:57:36.973639] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:31.441 [2024-07-24 21:57:36.973649] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:31.441 [2024-07-24 21:57:36.973661] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:31.441 [2024-07-24 21:57:36.973671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973676] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.441 [2024-07-24 21:57:36.973684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.441 [2024-07-24 21:57:36.973706] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.441 [2024-07-24 21:57:36.973807] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.441 [2024-07-24 21:57:36.973814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.441 [2024-07-24 21:57:36.973819] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973823] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=4096, cccid=0 00:17:31.441 [2024-07-24 21:57:36.973828] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa641d0) on tqpair(0xa2b970): expected_datao=0, payload_size=4096 00:17:31.441 [2024-07-24 21:57:36.973833] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973841] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 21:57:36.973846] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.441 [2024-07-24 
21:57:36.973855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.441 [2024-07-24 21:57:36.973862] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.441 [2024-07-24 21:57:36.973865] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.973870] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.442 [2024-07-24 21:57:36.973878] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:31.442 [2024-07-24 21:57:36.973884] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:31.442 [2024-07-24 21:57:36.973888] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:31.442 [2024-07-24 21:57:36.973893] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:31.442 [2024-07-24 21:57:36.973898] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:31.442 [2024-07-24 21:57:36.973903] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.973917] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.973926] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.973931] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.973935] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.442 [2024-07-24 21:57:36.973943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.442 [2024-07-24 21:57:36.973964] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.442 [2024-07-24 21:57:36.974019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.442 [2024-07-24 21:57:36.974026] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.442 [2024-07-24 21:57:36.974030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974035] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa641d0) on tqpair=0xa2b970 00:17:31.442 [2024-07-24 21:57:36.974043] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974047] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa2b970) 00:17:31.442 [2024-07-24 21:57:36.974058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.442 [2024-07-24 21:57:36.974064] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974069] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974072] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa2b970) 
00:17:31.442 [2024-07-24 21:57:36.974079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.442 [2024-07-24 21:57:36.974085] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974089] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa2b970) 00:17:31.442 [2024-07-24 21:57:36.974099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.442 [2024-07-24 21:57:36.974105] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974109] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974113] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.442 [2024-07-24 21:57:36.974119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.442 [2024-07-24 21:57:36.974125] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.974134] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.974142] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974146] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2b970) 00:17:31.442 [2024-07-24 21:57:36.974153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.442 [2024-07-24 21:57:36.974178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa641d0, cid 0, qid 0 00:17:31.442 [2024-07-24 21:57:36.974186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64330, cid 1, qid 0 00:17:31.442 [2024-07-24 21:57:36.974191] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64490, cid 2, qid 0 00:17:31.442 [2024-07-24 21:57:36.974196] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.442 [2024-07-24 21:57:36.974201] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64750, cid 4, qid 0 00:17:31.442 [2024-07-24 21:57:36.974290] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.442 [2024-07-24 21:57:36.974306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.442 [2024-07-24 21:57:36.974311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64750) on tqpair=0xa2b970 00:17:31.442 [2024-07-24 21:57:36.974321] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:31.442 [2024-07-24 21:57:36.974327] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:31.442 [2024-07-24 
21:57:36.974337] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.974345] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.974352] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974361] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2b970) 00:17:31.442 [2024-07-24 21:57:36.974369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.442 [2024-07-24 21:57:36.974388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64750, cid 4, qid 0 00:17:31.442 [2024-07-24 21:57:36.974441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.442 [2024-07-24 21:57:36.974448] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.442 [2024-07-24 21:57:36.974452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974456] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64750) on tqpair=0xa2b970 00:17:31.442 [2024-07-24 21:57:36.974522] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.974540] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.974549] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2b970) 00:17:31.442 [2024-07-24 21:57:36.974561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.442 [2024-07-24 21:57:36.974582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64750, cid 4, qid 0 00:17:31.442 [2024-07-24 21:57:36.974655] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.442 [2024-07-24 21:57:36.974664] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.442 [2024-07-24 21:57:36.974668] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974672] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=4096, cccid=4 00:17:31.442 [2024-07-24 21:57:36.974677] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa64750) on tqpair(0xa2b970): expected_datao=0, payload_size=4096 00:17:31.442 [2024-07-24 21:57:36.974681] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974689] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974693] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.442 [2024-07-24 21:57:36.974709] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.442 [2024-07-24 21:57:36.974713] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.442 [2024-07-24 21:57:36.974717] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64750) on tqpair=0xa2b970 00:17:31.442 [2024-07-24 21:57:36.974732] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:31.442 [2024-07-24 21:57:36.974743] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:31.442 [2024-07-24 21:57:36.974754] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.974762] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.974766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2b970) 00:17:31.443 [2024-07-24 21:57:36.974774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.443 [2024-07-24 21:57:36.974796] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64750, cid 4, qid 0 00:17:31.443 [2024-07-24 21:57:36.974861] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.443 [2024-07-24 21:57:36.974868] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.443 [2024-07-24 21:57:36.974872] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.974876] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=4096, cccid=4 00:17:31.443 [2024-07-24 21:57:36.974881] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa64750) on tqpair(0xa2b970): expected_datao=0, payload_size=4096 00:17:31.443 [2024-07-24 21:57:36.974885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.974893] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.974897] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.974905] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.443 [2024-07-24 21:57:36.974912] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.443 [2024-07-24 21:57:36.974916] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.974920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64750) on tqpair=0xa2b970 00:17:31.443 [2024-07-24 21:57:36.974932] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.974942] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.974950] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.974955] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2b970) 00:17:31.443 [2024-07-24 21:57:36.974962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.443 [2024-07-24 21:57:36.974982] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64750, cid 4, qid 0 00:17:31.443 [2024-07-24 21:57:36.975044] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.443 [2024-07-24 21:57:36.975051] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.443 [2024-07-24 21:57:36.975055] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975059] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=4096, cccid=4 00:17:31.443 [2024-07-24 21:57:36.975063] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa64750) on tqpair(0xa2b970): expected_datao=0, payload_size=4096 00:17:31.443 [2024-07-24 21:57:36.975068] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975075] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975080] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975088] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.443 [2024-07-24 21:57:36.975094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.443 [2024-07-24 21:57:36.975098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975102] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64750) on tqpair=0xa2b970 00:17:31.443 [2024-07-24 21:57:36.975111] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.975121] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.975133] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.975140] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.975145] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.975151] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:31.443 [2024-07-24 21:57:36.975156] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:31.443 [2024-07-24 21:57:36.975161] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:31.443 [2024-07-24 21:57:36.975181] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975187] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2b970) 00:17:31.443 [2024-07-24 21:57:36.975194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.443 [2024-07-24 21:57:36.975202] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa2b970) 00:17:31.443 [2024-07-24 21:57:36.975216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.443 [2024-07-24 21:57:36.975241] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64750, cid 4, qid 0 00:17:31.443 [2024-07-24 21:57:36.975249] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa648b0, cid 5, qid 0 00:17:31.443 [2024-07-24 21:57:36.975309] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.443 [2024-07-24 21:57:36.975316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.443 [2024-07-24 21:57:36.975320] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975324] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64750) on tqpair=0xa2b970 00:17:31.443 [2024-07-24 21:57:36.975331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.443 [2024-07-24 21:57:36.975337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.443 [2024-07-24 21:57:36.975341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa648b0) on tqpair=0xa2b970 00:17:31.443 [2024-07-24 21:57:36.975356] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975361] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa2b970) 00:17:31.443 [2024-07-24 21:57:36.975368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.443 [2024-07-24 21:57:36.975386] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa648b0, cid 5, qid 0 00:17:31.443 [2024-07-24 21:57:36.975429] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.443 [2024-07-24 21:57:36.975436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.443 [2024-07-24 21:57:36.975440] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975444] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa648b0) on tqpair=0xa2b970 00:17:31.443 [2024-07-24 21:57:36.975455] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.443 [2024-07-24 21:57:36.975459] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa2b970) 00:17:31.443 [2024-07-24 21:57:36.975466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.443 [2024-07-24 21:57:36.975483] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa648b0, cid 5, qid 0 00:17:31.443 [2024-07-24 21:57:36.975530] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.443 [2024-07-24 21:57:36.975537] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.444 [2024-07-24 21:57:36.975541] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975545] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa648b0) on tqpair=0xa2b970 00:17:31.444 [2024-07-24 21:57:36.975556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975560] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa2b970) 00:17:31.444 [2024-07-24 21:57:36.975567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.444 [2024-07-24 21:57:36.975585] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa648b0, cid 5, qid 0 00:17:31.444 [2024-07-24 21:57:36.975646] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.444 [2024-07-24 21:57:36.975655] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.444 [2024-07-24 21:57:36.975659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa648b0) on tqpair=0xa2b970 00:17:31.444 [2024-07-24 21:57:36.975677] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975682] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa2b970) 00:17:31.444 [2024-07-24 21:57:36.975690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.444 [2024-07-24 21:57:36.975698] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975702] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa2b970) 00:17:31.444 [2024-07-24 21:57:36.975709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.444 [2024-07-24 21:57:36.975716] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975720] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa2b970) 00:17:31.444 [2024-07-24 21:57:36.975727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.444 [2024-07-24 21:57:36.975735] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975739] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa2b970) 00:17:31.444 [2024-07-24 21:57:36.975745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.444 [2024-07-24 21:57:36.975768] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa648b0, cid 5, qid 0 00:17:31.444 [2024-07-24 21:57:36.975775] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64750, cid 4, qid 0 00:17:31.444 [2024-07-24 21:57:36.975780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64a10, cid 6, qid 0 00:17:31.444 [2024-07-24 21:57:36.975785] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64b70, cid 7, qid 0 00:17:31.444 [2024-07-24 21:57:36.975910] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.444 [2024-07-24 21:57:36.975926] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.444 [2024-07-24 21:57:36.975931] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975935] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=8192, cccid=5 00:17:31.444 [2024-07-24 21:57:36.975940] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa648b0) on tqpair(0xa2b970): expected_datao=0, payload_size=8192 00:17:31.444 [2024-07-24 21:57:36.975945] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975963] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975968] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975974] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.444 [2024-07-24 21:57:36.975980] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.444 [2024-07-24 21:57:36.975984] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.975988] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=512, cccid=4 00:17:31.444 [2024-07-24 21:57:36.975993] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa64750) on tqpair(0xa2b970): expected_datao=0, payload_size=512 00:17:31.444 [2024-07-24 21:57:36.975998] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976004] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976008] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976014] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.444 [2024-07-24 21:57:36.976019] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.444 [2024-07-24 21:57:36.976023] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976027] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=512, cccid=6 00:17:31.444 [2024-07-24 21:57:36.976031] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa64a10) on tqpair(0xa2b970): expected_datao=0, payload_size=512 00:17:31.444 [2024-07-24 21:57:36.976036] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976042] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976046] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976052] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:31.444 [2024-07-24 21:57:36.976058] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:31.444 [2024-07-24 21:57:36.976061] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976065] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa2b970): datao=0, datal=4096, cccid=7 00:17:31.444 [2024-07-24 21:57:36.976070] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xa64b70) on tqpair(0xa2b970): expected_datao=0, payload_size=4096 00:17:31.444 [2024-07-24 21:57:36.976074] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976081] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976085] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976091] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.444 [2024-07-24 21:57:36.976096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.444 [2024-07-24 21:57:36.976100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.444 [2024-07-24 21:57:36.976104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa648b0) on tqpair=0xa2b970 00:17:31.444 ===================================================== 00:17:31.444 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:31.444 ===================================================== 00:17:31.444 Controller Capabilities/Features 00:17:31.444 ================================ 00:17:31.444 Vendor ID: 8086 00:17:31.444 Subsystem Vendor ID: 8086 00:17:31.444 Serial Number: SPDK00000000000001 00:17:31.444 Model Number: SPDK bdev Controller 00:17:31.444 Firmware Version: 24.05.1 00:17:31.444 Recommended Arb Burst: 6 00:17:31.444 IEEE OUI Identifier: e4 d2 5c 00:17:31.444 Multi-path I/O 00:17:31.444 May have multiple subsystem ports: Yes 00:17:31.444 May have multiple controllers: Yes 00:17:31.444 Associated with SR-IOV VF: No 00:17:31.444 Max Data Transfer Size: 131072 00:17:31.444 Max Number of Namespaces: 32 00:17:31.444 Max Number of I/O Queues: 127 00:17:31.444 NVMe Specification Version (VS): 1.3 00:17:31.444 NVMe Specification Version (Identify): 1.3 00:17:31.444 Maximum Queue Entries: 128 00:17:31.445 Contiguous Queues Required: Yes 00:17:31.445 Arbitration Mechanisms Supported 00:17:31.445 Weighted Round Robin: Not Supported 00:17:31.445 Vendor Specific: Not Supported 00:17:31.445 Reset Timeout: 15000 ms 00:17:31.445 Doorbell Stride: 4 bytes 00:17:31.445 NVM Subsystem Reset: Not Supported 00:17:31.445 Command Sets Supported 00:17:31.445 NVM Command Set: Supported 00:17:31.445 Boot Partition: Not Supported 00:17:31.445 Memory Page Size Minimum: 4096 bytes 00:17:31.445 Memory Page Size Maximum: 4096 bytes 00:17:31.445 Persistent Memory Region: Not Supported 00:17:31.445 Optional Asynchronous Events Supported 00:17:31.445 Namespace Attribute Notices: Supported 00:17:31.445 Firmware Activation Notices: Not Supported 00:17:31.445 ANA Change Notices: Not Supported 00:17:31.445 PLE Aggregate Log Change Notices: Not Supported 00:17:31.445 LBA Status Info Alert Notices: Not Supported 00:17:31.445 EGE Aggregate Log Change Notices: Not Supported 00:17:31.445 Normal NVM Subsystem Shutdown event: Not Supported 00:17:31.445 Zone Descriptor Change Notices: Not Supported 00:17:31.445 Discovery Log Change Notices: Not Supported 00:17:31.445 Controller Attributes 00:17:31.445 128-bit Host Identifier: Supported 00:17:31.445 Non-Operational Permissive Mode: Not Supported 00:17:31.445 NVM Sets: Not Supported 00:17:31.445 Read Recovery Levels: Not Supported 00:17:31.445 Endurance Groups: Not Supported 00:17:31.445 Predictable Latency Mode: Not Supported 00:17:31.445 Traffic Based Keep ALive: Not Supported 00:17:31.445 Namespace Granularity: Not Supported 00:17:31.445 SQ Associations: Not Supported 00:17:31.445 UUID List: Not Supported 
00:17:31.445 Multi-Domain Subsystem: Not Supported 00:17:31.445 Fixed Capacity Management: Not Supported 00:17:31.445 Variable Capacity Management: Not Supported 00:17:31.445 Delete Endurance Group: Not Supported 00:17:31.445 Delete NVM Set: Not Supported 00:17:31.445 Extended LBA Formats Supported: Not Supported 00:17:31.445 Flexible Data Placement Supported: Not Supported 00:17:31.445 00:17:31.445 Controller Memory Buffer Support 00:17:31.445 ================================ 00:17:31.445 Supported: No 00:17:31.445 00:17:31.445 Persistent Memory Region Support 00:17:31.445 ================================ 00:17:31.445 Supported: No 00:17:31.445 00:17:31.445 Admin Command Set Attributes 00:17:31.445 ============================ 00:17:31.445 Security Send/Receive: Not Supported 00:17:31.445 Format NVM: Not Supported 00:17:31.445 Firmware Activate/Download: Not Supported 00:17:31.445 Namespace Management: Not Supported 00:17:31.445 Device Self-Test: Not Supported 00:17:31.445 Directives: Not Supported 00:17:31.445 NVMe-MI: Not Supported 00:17:31.445 Virtualization Management: Not Supported 00:17:31.445 Doorbell Buffer Config: Not Supported 00:17:31.445 Get LBA Status Capability: Not Supported 00:17:31.445 Command & Feature Lockdown Capability: Not Supported 00:17:31.445 Abort Command Limit: 4 00:17:31.445 Async Event Request Limit: 4 00:17:31.445 Number of Firmware Slots: N/A 00:17:31.445 Firmware Slot 1 Read-Only: N/A 00:17:31.445 Firmware Activation Without Reset: [2024-07-24 21:57:36.976121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.445 [2024-07-24 21:57:36.976128] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.445 [2024-07-24 21:57:36.976132] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.445 [2024-07-24 21:57:36.976136] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64750) on tqpair=0xa2b970 00:17:31.445 [2024-07-24 21:57:36.976146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.445 [2024-07-24 21:57:36.976152] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.445 [2024-07-24 21:57:36.976156] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.445 [2024-07-24 21:57:36.976160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64a10) on tqpair=0xa2b970 00:17:31.445 [2024-07-24 21:57:36.976173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.445 [2024-07-24 21:57:36.976179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.445 [2024-07-24 21:57:36.976183] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.445 [2024-07-24 21:57:36.976187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64b70) on tqpair=0xa2b970 00:17:31.445 N/A 00:17:31.445 Multiple Update Detection Support: N/A 00:17:31.445 Firmware Update Granularity: No Information Provided 00:17:31.445 Per-Namespace SMART Log: No 00:17:31.445 Asymmetric Namespace Access Log Page: Not Supported 00:17:31.445 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:31.445 Command Effects Log Page: Supported 00:17:31.445 Get Log Page Extended Data: Supported 00:17:31.445 Telemetry Log Pages: Not Supported 00:17:31.445 Persistent Event Log Pages: Not Supported 00:17:31.445 Supported Log Pages Log Page: May Support 00:17:31.445 Commands Supported & Effects Log Page: Not Supported 00:17:31.445 Feature Identifiers & Effects Log Page:May Support 
00:17:31.445 NVMe-MI Commands & Effects Log Page: May Support 00:17:31.445 Data Area 4 for Telemetry Log: Not Supported 00:17:31.445 Error Log Page Entries Supported: 128 00:17:31.445 Keep Alive: Supported 00:17:31.445 Keep Alive Granularity: 10000 ms 00:17:31.445 00:17:31.445 NVM Command Set Attributes 00:17:31.445 ========================== 00:17:31.445 Submission Queue Entry Size 00:17:31.445 Max: 64 00:17:31.445 Min: 64 00:17:31.445 Completion Queue Entry Size 00:17:31.445 Max: 16 00:17:31.445 Min: 16 00:17:31.446 Number of Namespaces: 32 00:17:31.446 Compare Command: Supported 00:17:31.446 Write Uncorrectable Command: Not Supported 00:17:31.446 Dataset Management Command: Supported 00:17:31.446 Write Zeroes Command: Supported 00:17:31.446 Set Features Save Field: Not Supported 00:17:31.446 Reservations: Supported 00:17:31.446 Timestamp: Not Supported 00:17:31.446 Copy: Supported 00:17:31.446 Volatile Write Cache: Present 00:17:31.446 Atomic Write Unit (Normal): 1 00:17:31.446 Atomic Write Unit (PFail): 1 00:17:31.446 Atomic Compare & Write Unit: 1 00:17:31.446 Fused Compare & Write: Supported 00:17:31.446 Scatter-Gather List 00:17:31.446 SGL Command Set: Supported 00:17:31.446 SGL Keyed: Supported 00:17:31.446 SGL Bit Bucket Descriptor: Not Supported 00:17:31.446 SGL Metadata Pointer: Not Supported 00:17:31.446 Oversized SGL: Not Supported 00:17:31.446 SGL Metadata Address: Not Supported 00:17:31.446 SGL Offset: Supported 00:17:31.446 Transport SGL Data Block: Not Supported 00:17:31.446 Replay Protected Memory Block: Not Supported 00:17:31.446 00:17:31.446 Firmware Slot Information 00:17:31.446 ========================= 00:17:31.446 Active slot: 1 00:17:31.446 Slot 1 Firmware Revision: 24.05.1 00:17:31.446 00:17:31.446 00:17:31.446 Commands Supported and Effects 00:17:31.446 ============================== 00:17:31.446 Admin Commands 00:17:31.446 -------------- 00:17:31.446 Get Log Page (02h): Supported 00:17:31.446 Identify (06h): Supported 00:17:31.446 Abort (08h): Supported 00:17:31.446 Set Features (09h): Supported 00:17:31.446 Get Features (0Ah): Supported 00:17:31.446 Asynchronous Event Request (0Ch): Supported 00:17:31.446 Keep Alive (18h): Supported 00:17:31.446 I/O Commands 00:17:31.446 ------------ 00:17:31.446 Flush (00h): Supported LBA-Change 00:17:31.446 Write (01h): Supported LBA-Change 00:17:31.446 Read (02h): Supported 00:17:31.446 Compare (05h): Supported 00:17:31.446 Write Zeroes (08h): Supported LBA-Change 00:17:31.446 Dataset Management (09h): Supported LBA-Change 00:17:31.446 Copy (19h): Supported LBA-Change 00:17:31.446 Unknown (79h): Supported LBA-Change 00:17:31.446 Unknown (7Ah): Supported 00:17:31.446 00:17:31.446 Error Log 00:17:31.446 ========= 00:17:31.446 00:17:31.446 Arbitration 00:17:31.446 =========== 00:17:31.446 Arbitration Burst: 1 00:17:31.446 00:17:31.446 Power Management 00:17:31.446 ================ 00:17:31.446 Number of Power States: 1 00:17:31.446 Current Power State: Power State #0 00:17:31.446 Power State #0: 00:17:31.446 Max Power: 0.00 W 00:17:31.446 Non-Operational State: Operational 00:17:31.446 Entry Latency: Not Reported 00:17:31.446 Exit Latency: Not Reported 00:17:31.446 Relative Read Throughput: 0 00:17:31.446 Relative Read Latency: 0 00:17:31.446 Relative Write Throughput: 0 00:17:31.446 Relative Write Latency: 0 00:17:31.446 Idle Power: Not Reported 00:17:31.446 Active Power: Not Reported 00:17:31.446 Non-Operational Permissive Mode: Not Supported 00:17:31.446 00:17:31.446 Health Information 00:17:31.446 ================== 
00:17:31.446 Critical Warnings: 00:17:31.446 Available Spare Space: OK 00:17:31.446 Temperature: OK 00:17:31.446 Device Reliability: OK 00:17:31.446 Read Only: No 00:17:31.446 Volatile Memory Backup: OK 00:17:31.446 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:31.446 Temperature Threshold: [2024-07-24 21:57:36.976297] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.446 [2024-07-24 21:57:36.976305] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa2b970) 00:17:31.446 [2024-07-24 21:57:36.976314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.446 [2024-07-24 21:57:36.976338] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa64b70, cid 7, qid 0 00:17:31.446 [2024-07-24 21:57:36.976387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.446 [2024-07-24 21:57:36.976394] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.446 [2024-07-24 21:57:36.976399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.446 [2024-07-24 21:57:36.976403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa64b70) on tqpair=0xa2b970 00:17:31.446 [2024-07-24 21:57:36.976438] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:31.446 [2024-07-24 21:57:36.976452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.446 [2024-07-24 21:57:36.976459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.446 [2024-07-24 21:57:36.976466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.446 [2024-07-24 21:57:36.976472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.446 [2024-07-24 21:57:36.976482] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.446 [2024-07-24 21:57:36.976486] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.446 [2024-07-24 21:57:36.976490] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.446 [2024-07-24 21:57:36.976498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.446 [2024-07-24 21:57:36.976520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.446 [2024-07-24 21:57:36.976572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.976584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.976589] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.976594] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.976602] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.976607] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.980632] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.447 [2024-07-24 21:57:36.980645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.447 [2024-07-24 21:57:36.980678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.447 [2024-07-24 21:57:36.980748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.980756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.980760] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.980764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.980770] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:31.447 [2024-07-24 21:57:36.980775] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:31.447 [2024-07-24 21:57:36.980787] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.980792] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.980796] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.447 [2024-07-24 21:57:36.980804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.447 [2024-07-24 21:57:36.980822] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.447 [2024-07-24 21:57:36.980879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.980886] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.980890] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.980894] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.980905] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.980910] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.980914] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.447 [2024-07-24 21:57:36.980922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.447 [2024-07-24 21:57:36.980940] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.447 [2024-07-24 21:57:36.980985] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.980992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.980996] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981000] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.981022] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981028] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.447 [2024-07-24 21:57:36.981040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.447 [2024-07-24 21:57:36.981059] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.447 [2024-07-24 21:57:36.981111] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.981118] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.981122] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981126] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.981136] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981141] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981145] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.447 [2024-07-24 21:57:36.981153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.447 [2024-07-24 21:57:36.981170] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.447 [2024-07-24 21:57:36.981218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.981225] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.981229] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981233] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.981244] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981248] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981252] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.447 [2024-07-24 21:57:36.981260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.447 [2024-07-24 21:57:36.981277] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.447 [2024-07-24 21:57:36.981325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.981337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.981342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.981357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981362] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981366] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.447 
[2024-07-24 21:57:36.981374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.447 [2024-07-24 21:57:36.981392] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.447 [2024-07-24 21:57:36.981441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.447 [2024-07-24 21:57:36.981447] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.447 [2024-07-24 21:57:36.981451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981455] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.447 [2024-07-24 21:57:36.981466] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.447 [2024-07-24 21:57:36.981475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.981482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.981499] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.981542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.981549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.981553] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.981567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981572] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.981583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.981601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.981664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.981673] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.981677] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981681] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.981692] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981697] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.981708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.981729] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.981775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.981782] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.981786] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.981801] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981805] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.981817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.981835] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.981881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.981888] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.981891] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981896] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.981906] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.981915] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.981922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.981940] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.981988] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.981995] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.981999] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.982013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982018] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982022] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.982030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.982047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.982095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 
[2024-07-24 21:57:36.982102] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.982106] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.982120] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982125] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982129] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.982136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.982154] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.982201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.982208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.982212] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.982227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982232] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.982243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.982260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.982305] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.982312] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.982316] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982320] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.448 [2024-07-24 21:57:36.982330] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.448 [2024-07-24 21:57:36.982339] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.448 [2024-07-24 21:57:36.982346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.448 [2024-07-24 21:57:36.982364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.448 [2024-07-24 21:57:36.982410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.448 [2024-07-24 21:57:36.982417] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.448 [2024-07-24 21:57:36.982421] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
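The repeated FABRIC PROPERTY GET qid:0 cid:3 records around this point belong to the teardown announced above ("Prepare to destruct SSD", RTD3E = 0 us, shutdown timeout = 10000 ms): after the FABRIC PROPERTY SET that writes CC, the driver keeps reading a controller property over the admin queue (presumably CSTS, checking SHST) until shutdown completes or the 10000 ms timeout expires. On the host side this whole sequence is performed by a detach call; a minimal sketch, reusing the ctrlr handle from the connect sketch earlier (again illustrative, not the code that produced this trace):

/*
 * Illustrative teardown sketch. spdk_nvme_detach() runs the shutdown
 * sequence traced here: it sets CC.SHN and then polls the controller
 * status (the repeated FABRIC PROPERTY GET records) until shutdown is
 * reported complete or the shutdown timeout noted above is reached,
 * then frees the ctrlr.
 */
#include "spdk/nvme.h"

static void teardown(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Blocking variant; recent SPDK releases also provide
	 * spdk_nvme_detach_async() plus spdk_nvme_detach_poll() for use
	 * from a reactor/poller context. */
	spdk_nvme_detach(ctrlr);
}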
00:17:31.451 [2024-07-24 21:57:36.984519] nvme_tcp.c: 909:nvme_tcp_req_complete_safe:
*DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.451 [2024-07-24 21:57:36.984530] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.451 [2024-07-24 21:57:36.984534] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.451 [2024-07-24 21:57:36.984538] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.451 [2024-07-24 21:57:36.984546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.451 [2024-07-24 21:57:36.984563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.451 [2024-07-24 21:57:36.988622] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.451 [2024-07-24 21:57:36.988659] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.451 [2024-07-24 21:57:36.988665] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.451 [2024-07-24 21:57:36.988670] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.451 [2024-07-24 21:57:36.988686] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:31.451 [2024-07-24 21:57:36.988692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:31.451 [2024-07-24 21:57:36.988696] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa2b970) 00:17:31.451 [2024-07-24 21:57:36.988705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.451 [2024-07-24 21:57:36.988733] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa645f0, cid 3, qid 0 00:17:31.451 [2024-07-24 21:57:36.988794] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:31.451 [2024-07-24 21:57:36.988802] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:31.451 [2024-07-24 21:57:36.988806] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:31.451 [2024-07-24 21:57:36.988810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa645f0) on tqpair=0xa2b970 00:17:31.451 [2024-07-24 21:57:36.988818] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:17:31.451 0 Kelvin (-273 Celsius) 00:17:31.451 Available Spare: 0% 00:17:31.451 Available Spare Threshold: 0% 00:17:31.451 Life Percentage Used: 0% 00:17:31.451 Data Units Read: 0 00:17:31.451 Data Units Written: 0 00:17:31.451 Host Read Commands: 0 00:17:31.451 Host Write Commands: 0 00:17:31.451 Controller Busy Time: 0 minutes 00:17:31.451 Power Cycles: 0 00:17:31.451 Power On Hours: 0 hours 00:17:31.451 Unsafe Shutdowns: 0 00:17:31.451 Unrecoverable Media Errors: 0 00:17:31.451 Lifetime Error Log Entries: 0 00:17:31.451 Warning Temperature Time: 0 minutes 00:17:31.451 Critical Temperature Time: 0 minutes 00:17:31.451 00:17:31.451 Number of Queues 00:17:31.451 ================ 00:17:31.451 Number of I/O Submission Queues: 127 00:17:31.451 Number of I/O Completion Queues: 127 00:17:31.451 00:17:31.451 Active Namespaces 00:17:31.451 ================= 00:17:31.451 Namespace ID:1 00:17:31.451 Error Recovery Timeout: Unlimited 00:17:31.451 Command Set Identifier: NVM (00h) 00:17:31.451 Deallocate: Supported 00:17:31.451 Deallocated/Unwritten Error: Not Supported 00:17:31.451 
Deallocated Read Value: Unknown 00:17:31.451 Deallocate in Write Zeroes: Not Supported 00:17:31.451 Deallocated Guard Field: 0xFFFF 00:17:31.451 Flush: Supported 00:17:31.451 Reservation: Supported 00:17:31.451 Namespace Sharing Capabilities: Multiple Controllers 00:17:31.451 Size (in LBAs): 131072 (0GiB) 00:17:31.451 Capacity (in LBAs): 131072 (0GiB) 00:17:31.451 Utilization (in LBAs): 131072 (0GiB) 00:17:31.451 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:31.451 EUI64: ABCDEF0123456789 00:17:31.451 UUID: affc542a-eba8-4edc-95af-0b05f670409d 00:17:31.451 Thin Provisioning: Not Supported 00:17:31.451 Per-NS Atomic Units: Yes 00:17:31.451 Atomic Boundary Size (Normal): 0 00:17:31.451 Atomic Boundary Size (PFail): 0 00:17:31.451 Atomic Boundary Offset: 0 00:17:31.451 Maximum Single Source Range Length: 65535 00:17:31.451 Maximum Copy Length: 65535 00:17:31.451 Maximum Source Range Count: 1 00:17:31.451 NGUID/EUI64 Never Reused: No 00:17:31.451 Namespace Write Protected: No 00:17:31.451 Number of LBA Formats: 1 00:17:31.451 Current LBA Format: LBA Format #00 00:17:31.451 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:31.451 00:17:31.451 21:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:31.451 21:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.451 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.451 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.451 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.452 rmmod nvme_tcp 00:17:31.452 rmmod nvme_fabrics 00:17:31.452 rmmod nvme_keyring 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 88917 ']' 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 88917 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 88917 ']' 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 88917 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88917 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88917' 00:17:31.452 killing process with pid 88917 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 88917 00:17:31.452 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 88917 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:31.711 ************************************ 00:17:31.711 END TEST nvmf_identify 00:17:31.711 00:17:31.711 real 0m2.383s 00:17:31.711 user 0m6.629s 00:17:31.711 sys 0m0.603s 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:31.711 21:57:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:31.711 ************************************ 00:17:31.970 21:57:37 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:31.970 21:57:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:31.970 21:57:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:31.970 21:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.970 ************************************ 00:17:31.970 START TEST nvmf_perf 00:17:31.970 ************************************ 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:31.970 * Looking for test storage... 
00:17:31.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.970 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:31.971 Cannot find device "nvmf_tgt_br" 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.971 Cannot find device "nvmf_tgt_br2" 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:31.971 Cannot find device "nvmf_tgt_br" 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:31.971 Cannot find device "nvmf_tgt_br2" 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:31.971 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:32.231 
21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:32.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:32.231 00:17:32.231 --- 10.0.0.2 ping statistics --- 00:17:32.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.231 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:32.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:32.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:32.231 00:17:32.231 --- 10.0.0.3 ping statistics --- 00:17:32.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.231 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:32.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:32.231 00:17:32.231 --- 10.0.0.1 ping statistics --- 00:17:32.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.231 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:32.231 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=89123 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 89123 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 89123 ']' 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.232 21:57:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:32.491 [2024-07-24 21:57:37.968842] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:32.491 [2024-07-24 21:57:37.968947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.491 [2024-07-24 21:57:38.109137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.491 [2024-07-24 21:57:38.204431] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.491 [2024-07-24 21:57:38.204477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
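For reference, the environment that nvmf_veth_init and nvmfappstart build in the lines above, and the subsystem configuration performed by the rpc.py calls that follow, reduce to roughly the sketch below. Names, addresses and paths are copied from this log; the pre-cleanup, retry and timing logic of the real helpers is omitted, so treat it as a minimal approximation rather than the script itself.

    # Test topology: the initiator stays in the root namespace (10.0.0.1); the
    # target runs inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 / 10.0.0.3);
    # a bridge joins the veth peers.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3          # reachability check, as above

    # Start the target inside the namespace (core mask 0xF, all tracepoint groups);
    # the harness then polls /var/tmp/spdk.sock (waitforlisten) before issuing RPCs.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Subsystem configuration issued by the rpc.py calls that follow in this log:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_malloc_create 64 512                    # returned 'Malloc0' in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # Nvme0n1 attached earlier via load_subsystem_config (0000:00:10.0)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The INPUT rule opens TCP port 4420, the NVMe/TCP listener port used throughout this run, so the initiator in the root namespace can reach the target behind the bridge.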
00:17:32.491 [2024-07-24 21:57:38.204489] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.491 [2024-07-24 21:57:38.204498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.491 [2024-07-24 21:57:38.204506] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.491 [2024-07-24 21:57:38.204671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.491 [2024-07-24 21:57:38.204807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.491 [2024-07-24 21:57:38.205155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.491 [2024-07-24 21:57:38.205161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.749 [2024-07-24 21:57:38.259647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:33.317 21:57:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:33.884 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:33.884 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:34.142 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:34.142 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:34.400 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:34.400 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:34.400 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:34.400 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:34.400 21:57:39 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.658 [2024-07-24 21:57:40.197080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.659 21:57:40 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.917 21:57:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:34.917 21:57:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.175 21:57:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:35.175 21:57:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:17:35.433 21:57:40 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.433 [2024-07-24 21:57:41.110132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.434 21:57:41 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:35.692 21:57:41 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:35.692 21:57:41 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:35.692 21:57:41 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:35.692 21:57:41 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:37.067 Initializing NVMe Controllers 00:17:37.067 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:37.067 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:37.067 Initialization complete. Launching workers. 00:17:37.067 ======================================================== 00:17:37.067 Latency(us) 00:17:37.067 Device Information : IOPS MiB/s Average min max 00:17:37.067 PCIE (0000:00:10.0) NSID 1 from core 0: 23968.00 93.62 1334.98 285.94 8182.87 00:17:37.067 ======================================================== 00:17:37.067 Total : 23968.00 93.62 1334.98 285.94 8182.87 00:17:37.067 00:17:37.067 21:57:42 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:38.001 Initializing NVMe Controllers 00:17:38.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:38.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:38.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:38.001 Initialization complete. Launching workers. 00:17:38.001 ======================================================== 00:17:38.001 Latency(us) 00:17:38.001 Device Information : IOPS MiB/s Average min max 00:17:38.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3792.00 14.81 263.44 99.40 5160.79 00:17:38.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 127.00 0.50 7923.32 6018.92 12021.87 00:17:38.001 ======================================================== 00:17:38.001 Total : 3919.00 15.31 511.67 99.40 12021.87 00:17:38.001 00:17:38.258 21:57:43 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:39.633 Initializing NVMe Controllers 00:17:39.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:39.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:39.633 Initialization complete. Launching workers. 
00:17:39.633 ======================================================== 00:17:39.633 Latency(us) 00:17:39.633 Device Information : IOPS MiB/s Average min max 00:17:39.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8795.86 34.36 3640.67 606.64 10663.48 00:17:39.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3905.05 15.25 8257.89 5238.82 15578.67 00:17:39.633 ======================================================== 00:17:39.633 Total : 12700.91 49.61 5060.29 606.64 15578.67 00:17:39.633 00:17:39.633 21:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:39.633 21:57:45 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:42.164 Initializing NVMe Controllers 00:17:42.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:42.164 Controller IO queue size 128, less than required. 00:17:42.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:42.164 Controller IO queue size 128, less than required. 00:17:42.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:42.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:42.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:42.164 Initialization complete. Launching workers. 00:17:42.164 ======================================================== 00:17:42.164 Latency(us) 00:17:42.164 Device Information : IOPS MiB/s Average min max 00:17:42.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1687.77 421.94 77686.24 38269.32 121320.93 00:17:42.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 640.96 160.24 204606.68 70884.71 332592.31 00:17:42.164 ======================================================== 00:17:42.164 Total : 2328.74 582.18 112619.97 38269.32 332592.31 00:17:42.164 00:17:42.164 21:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:42.164 Initializing NVMe Controllers 00:17:42.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:42.165 Controller IO queue size 128, less than required. 00:17:42.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:42.165 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:42.165 Controller IO queue size 128, less than required. 00:17:42.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:42.165 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:42.165 WARNING: Some requested NVMe devices were skipped 00:17:42.165 No valid NVMe controllers or AIO or URING devices found 00:17:42.165 21:57:47 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:44.696 Initializing NVMe Controllers 00:17:44.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.696 Controller IO queue size 128, less than required. 00:17:44.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.696 Controller IO queue size 128, less than required. 00:17:44.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:44.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:44.696 Initialization complete. Launching workers. 00:17:44.696 00:17:44.696 ==================== 00:17:44.696 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:44.696 TCP transport: 00:17:44.696 polls: 8521 00:17:44.696 idle_polls: 4662 00:17:44.696 sock_completions: 3859 00:17:44.696 nvme_completions: 6673 00:17:44.696 submitted_requests: 9976 00:17:44.696 queued_requests: 1 00:17:44.696 00:17:44.696 ==================== 00:17:44.696 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:44.696 TCP transport: 00:17:44.696 polls: 8976 00:17:44.696 idle_polls: 4964 00:17:44.696 sock_completions: 4012 00:17:44.696 nvme_completions: 6779 00:17:44.696 submitted_requests: 10174 00:17:44.696 queued_requests: 1 00:17:44.696 ======================================================== 00:17:44.696 Latency(us) 00:17:44.696 Device Information : IOPS MiB/s Average min max 00:17:44.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1667.87 416.97 77463.02 40370.39 117127.77 00:17:44.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1694.37 423.59 76554.75 39433.88 126273.29 00:17:44.696 ======================================================== 00:17:44.696 Total : 3362.24 840.56 77005.30 39433.88 126273.29 00:17:44.696 00:17:44.696 21:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:44.696 21:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.954 21:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:17:44.954 21:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:17:44.954 21:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:17:45.213 21:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=42c03587-0d11-4c46-8bc7-b9da59df3205 00:17:45.213 21:57:50 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 42c03587-0d11-4c46-8bc7-b9da59df3205 00:17:45.213 21:57:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=42c03587-0d11-4c46-8bc7-b9da59df3205 00:17:45.213 21:57:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:17:45.213 21:57:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 
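For reference, the get_lvs_free_mb helper being entered here derives the store's usable size in MiB from bdev_lvol_get_lvstores output, as the jq calls on the following lines show. A minimal stand-alone sketch of the same computation (store UUID copied from this run; the real helper caches the RPC output in a variable instead of calling it twice):

    # Free space (MiB) of an lvol store = free_clusters * cluster_size / 1 MiB
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=42c03587-0d11-4c46-8bc7-b9da59df3205     # lvs_0, created on Nvme0n1 above
    fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
    cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
    free_mb=$((fc * cs / 1024 / 1024))            # 1278 clusters * 4 MiB = 5112 MiB in this run
    echo "$free_mb"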
00:17:45.213 21:57:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:17:45.213 21:57:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:17:45.472 { 00:17:45.472 "uuid": "42c03587-0d11-4c46-8bc7-b9da59df3205", 00:17:45.472 "name": "lvs_0", 00:17:45.472 "base_bdev": "Nvme0n1", 00:17:45.472 "total_data_clusters": 1278, 00:17:45.472 "free_clusters": 1278, 00:17:45.472 "block_size": 4096, 00:17:45.472 "cluster_size": 4194304 00:17:45.472 } 00:17:45.472 ]' 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="42c03587-0d11-4c46-8bc7-b9da59df3205") .free_clusters' 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="42c03587-0d11-4c46-8bc7-b9da59df3205") .cluster_size' 00:17:45.472 5112 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:17:45.472 21:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 42c03587-0d11-4c46-8bc7-b9da59df3205 lbd_0 5112 00:17:46.039 21:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=429d1906-af1e-482f-a8bd-96100a537ade 00:17:46.039 21:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 429d1906-af1e-482f-a8bd-96100a537ade lvs_n_0 00:17:46.297 21:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=766c58e1-43ef-4b8a-8ae0-4b839fe294fd 00:17:46.297 21:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 766c58e1-43ef-4b8a-8ae0-4b839fe294fd 00:17:46.297 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=766c58e1-43ef-4b8a-8ae0-4b839fe294fd 00:17:46.297 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:17:46.297 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:17:46.297 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:17:46.297 21:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:17:46.555 { 00:17:46.555 "uuid": "42c03587-0d11-4c46-8bc7-b9da59df3205", 00:17:46.555 "name": "lvs_0", 00:17:46.555 "base_bdev": "Nvme0n1", 00:17:46.555 "total_data_clusters": 1278, 00:17:46.555 "free_clusters": 0, 00:17:46.555 "block_size": 4096, 00:17:46.555 "cluster_size": 4194304 00:17:46.555 }, 00:17:46.555 { 00:17:46.555 "uuid": "766c58e1-43ef-4b8a-8ae0-4b839fe294fd", 00:17:46.555 "name": "lvs_n_0", 00:17:46.555 "base_bdev": "429d1906-af1e-482f-a8bd-96100a537ade", 00:17:46.555 "total_data_clusters": 1276, 00:17:46.555 "free_clusters": 1276, 00:17:46.555 "block_size": 4096, 00:17:46.555 "cluster_size": 4194304 00:17:46.555 } 00:17:46.555 ]' 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | 
select(.uuid=="766c58e1-43ef-4b8a-8ae0-4b839fe294fd") .free_clusters' 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="766c58e1-43ef-4b8a-8ae0-4b839fe294fd") .cluster_size' 00:17:46.555 5104 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:17:46.555 21:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 766c58e1-43ef-4b8a-8ae0-4b839fe294fd lbd_nest_0 5104 00:17:46.814 21:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e51d4340-68c1-4c50-b2d7-9b51d2f93283 00:17:46.814 21:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.072 21:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:17:47.072 21:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e51d4340-68c1-4c50-b2d7-9b51d2f93283 00:17:47.330 21:57:52 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.588 21:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:17:47.588 21:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:17:47.588 21:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:47.588 21:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:47.588 21:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:47.846 Initializing NVMe Controllers 00:17:47.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:47.846 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:47.846 WARNING: Some requested NVMe devices were skipped 00:17:47.846 No valid NVMe controllers or AIO or URING devices found 00:17:47.846 21:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:47.846 21:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:00.056 Initializing NVMe Controllers 00:18:00.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:00.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:00.056 Initialization complete. Launching workers. 
00:18:00.056 ======================================================== 00:18:00.056 Latency(us) 00:18:00.057 Device Information : IOPS MiB/s Average min max 00:18:00.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 978.70 122.34 1021.01 337.01 8559.94 00:18:00.057 ======================================================== 00:18:00.057 Total : 978.70 122.34 1021.01 337.01 8559.94 00:18:00.057 00:18:00.057 21:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:00.057 21:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:00.057 21:58:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:00.057 Initializing NVMe Controllers 00:18:00.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:00.057 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:00.057 WARNING: Some requested NVMe devices were skipped 00:18:00.057 No valid NVMe controllers or AIO or URING devices found 00:18:00.057 21:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:00.057 21:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:10.028 Initializing NVMe Controllers 00:18:10.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:10.028 Initialization complete. Launching workers. 
00:18:10.028 ======================================================== 00:18:10.028 Latency(us) 00:18:10.028 Device Information : IOPS MiB/s Average min max 00:18:10.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1340.10 167.51 23913.97 5292.89 62869.36 00:18:10.028 ======================================================== 00:18:10.028 Total : 1340.10 167.51 23913.97 5292.89 62869.36 00:18:10.028 00:18:10.028 21:58:14 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:10.028 21:58:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:10.028 21:58:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:10.028 Initializing NVMe Controllers 00:18:10.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.028 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:10.028 WARNING: Some requested NVMe devices were skipped 00:18:10.028 No valid NVMe controllers or AIO or URING devices found 00:18:10.028 21:58:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:10.028 21:58:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:19.998 Initializing NVMe Controllers 00:18:19.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:19.998 Controller IO queue size 128, less than required. 00:18:19.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:19.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:19.998 Initialization complete. Launching workers. 
00:18:19.998 ======================================================== 00:18:19.998 Latency(us) 00:18:19.998 Device Information : IOPS MiB/s Average min max 00:18:19.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4060.54 507.57 31558.07 12840.58 71613.40 00:18:19.998 ======================================================== 00:18:19.998 Total : 4060.54 507.57 31558.07 12840.58 71613.40 00:18:19.998 00:18:19.998 21:58:25 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.998 21:58:25 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e51d4340-68c1-4c50-b2d7-9b51d2f93283 00:18:19.998 21:58:25 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:20.256 21:58:25 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 429d1906-af1e-482f-a8bd-96100a537ade 00:18:20.513 21:58:26 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:20.770 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:20.770 rmmod nvme_tcp 00:18:20.771 rmmod nvme_fabrics 00:18:20.771 rmmod nvme_keyring 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 89123 ']' 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 89123 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 89123 ']' 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 89123 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89123 00:18:20.771 killing process with pid 89123 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89123' 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 89123 00:18:20.771 21:58:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 89123 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:21.704 00:18:21.704 real 0m49.891s 00:18:21.704 user 3m8.085s 00:18:21.704 sys 0m12.487s 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:21.704 ************************************ 00:18:21.704 21:58:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:21.704 END TEST nvmf_perf 00:18:21.704 ************************************ 00:18:21.704 21:58:27 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:21.704 21:58:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:21.704 21:58:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:21.704 21:58:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:21.704 ************************************ 00:18:21.704 START TEST nvmf_fio_host 00:18:21.704 ************************************ 00:18:21.704 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:21.962 * Looking for test storage... 
00:18:21.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:21.962 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
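The nvmf/common.sh setup traced above builds the host identity once (via nvme-cli's gen-hostnqn) and keeps it in the NVME_HOST array for later connects. A stand-alone sketch of just that piece, assuming nvme-cli is installed; the variable names mirror the trace, and the suffix-stripping is simply an illustrative way to reproduce the NVME_HOSTID value seen in the log:

# Host identity, as set up in nvmf/common.sh above.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # in the trace, the trailing uuid doubles as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# A later connect using these would then look like (subsystem NQN and address taken from this run):
#   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1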
00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:21.963 Cannot find device "nvmf_tgt_br" 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:21.963 Cannot find device "nvmf_tgt_br2" 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:21.963 Cannot find device "nvmf_tgt_br" 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:21.963 Cannot find device "nvmf_tgt_br2" 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:21.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:21.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:18:21.963 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:22.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:18:22.253 00:18:22.253 --- 10.0.0.2 ping statistics --- 00:18:22.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.253 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:22.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:22.253 00:18:22.253 --- 10.0.0.3 ping statistics --- 00:18:22.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.253 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:22.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:18:22.253 00:18:22.253 --- 10.0.0.1 ping statistics --- 00:18:22.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.253 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.253 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89925 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89925 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 89925 ']' 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.254 21:58:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.254 [2024-07-24 21:58:27.949939] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:22.254 [2024-07-24 21:58:27.950058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.511 [2024-07-24 21:58:28.093195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.511 [2024-07-24 21:58:28.189110] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.511 [2024-07-24 21:58:28.189166] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:22.511 [2024-07-24 21:58:28.189180] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.511 [2024-07-24 21:58:28.189191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.511 [2024-07-24 21:58:28.189200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.511 [2024-07-24 21:58:28.189524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.511 [2024-07-24 21:58:28.189882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.511 [2024-07-24 21:58:28.189971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.511 [2024-07-24 21:58:28.189975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.768 [2024-07-24 21:58:28.247732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:23.333 21:58:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.333 21:58:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:18:23.333 21:58:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:23.591 [2024-07-24 21:58:29.197343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.591 21:58:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:23.591 21:58:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.591 21:58:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.591 21:58:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:23.849 Malloc1 00:18:24.107 21:58:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:24.107 21:58:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:24.365 21:58:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.622 [2024-07-24 21:58:30.290129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.622 21:58:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:24.880 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:25.138 21:58:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:25.138 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:25.138 fio-3.35 00:18:25.138 Starting 1 thread 00:18:27.668 00:18:27.668 test: (groupid=0, jobs=1): err= 0: pid=90007: Wed Jul 24 21:58:33 2024 00:18:27.668 read: IOPS=8951, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2007msec) 00:18:27.668 slat (usec): min=2, max=330, avg= 2.81, stdev= 3.33 00:18:27.668 clat (usec): min=2576, max=13670, avg=7424.00, stdev=522.20 00:18:27.668 lat (usec): min=2616, max=13672, avg=7426.82, stdev=521.99 00:18:27.668 clat percentiles (usec): 00:18:27.668 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 7046], 00:18:27.668 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:18:27.668 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:18:27.668 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11863], 99.95th=[12518], 00:18:27.668 | 99.99th=[13566] 00:18:27.668 bw ( KiB/s): min=34746, max=36240, per=99.96%, avg=35792.50, stdev=703.34, samples=4 00:18:27.668 iops : min= 8686, max= 9060, avg=8948.00, stdev=176.08, samples=4 00:18:27.668 write: IOPS=8974, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec); 0 zone resets 00:18:27.668 slat (usec): min=2, max=260, avg= 2.94, stdev= 2.46 00:18:27.668 clat (usec): min=2430, max=12536, avg=6795.60, stdev=473.10 00:18:27.668 lat (usec): min=2448, 
max=12539, avg=6798.53, stdev=473.09 00:18:27.668 clat percentiles (usec): 00:18:27.668 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6456], 00:18:27.668 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6849], 00:18:27.668 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7439], 00:18:27.668 | 99.00th=[ 8029], 99.50th=[ 8455], 99.90th=[10552], 99.95th=[11600], 00:18:27.668 | 99.99th=[12518] 00:18:27.668 bw ( KiB/s): min=35520, max=36296, per=99.93%, avg=35872.00, stdev=380.04, samples=4 00:18:27.668 iops : min= 8880, max= 9074, avg=8968.00, stdev=95.01, samples=4 00:18:27.668 lat (msec) : 4=0.08%, 10=99.79%, 20=0.13% 00:18:27.668 cpu : usr=69.09%, sys=22.38%, ctx=6, majf=0, minf=6 00:18:27.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:27.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.668 issued rwts: total=17966,18011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.668 00:18:27.668 Run status group 0 (all jobs): 00:18:27.668 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2007-2007msec 00:18:27.668 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:27.668 21:58:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:27.668 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:27.668 fio-3.35 00:18:27.668 Starting 1 thread 00:18:30.198 00:18:30.198 test: (groupid=0, jobs=1): err= 0: pid=90057: Wed Jul 24 21:58:35 2024 00:18:30.198 read: IOPS=8051, BW=126MiB/s (132MB/s)(253MiB/2008msec) 00:18:30.198 slat (usec): min=3, max=118, avg= 3.89, stdev= 2.38 00:18:30.198 clat (usec): min=1763, max=18629, avg=8863.90, stdev=2902.35 00:18:30.198 lat (usec): min=1766, max=18633, avg=8867.79, stdev=2902.53 00:18:30.198 clat percentiles (usec): 00:18:30.198 | 1.00th=[ 4047], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 6194], 00:18:30.198 | 30.00th=[ 7046], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9241], 00:18:30.198 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12911], 95.00th=[14484], 00:18:30.198 | 99.00th=[16581], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:18:30.198 | 99.99th=[18482] 00:18:30.198 bw ( KiB/s): min=60864, max=74368, per=51.59%, avg=66464.00, stdev=5700.25, samples=4 00:18:30.198 iops : min= 3804, max= 4648, avg=4154.00, stdev=356.27, samples=4 00:18:30.198 write: IOPS=4598, BW=71.9MiB/s (75.3MB/s)(135MiB/1882msec); 0 zone resets 00:18:30.198 slat (usec): min=32, max=276, avg=38.46, stdev= 6.71 00:18:30.198 clat (usec): min=5015, max=22270, avg=12362.05, stdev=2547.54 00:18:30.198 lat (usec): min=5051, max=22312, avg=12400.51, stdev=2548.32 00:18:30.198 clat percentiles (usec): 00:18:30.198 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10290], 00:18:30.198 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:18:30.198 | 70.00th=[13304], 80.00th=[14353], 90.00th=[15926], 95.00th=[17433], 00:18:30.198 | 99.00th=[20055], 99.50th=[20579], 99.90th=[21365], 99.95th=[21627], 00:18:30.198 | 99.99th=[22152] 00:18:30.198 bw ( KiB/s): min=63648, max=77152, per=93.46%, avg=68768.00, stdev=5943.62, samples=4 00:18:30.198 iops : min= 3978, max= 4822, avg=4298.00, stdev=371.48, samples=4 00:18:30.198 lat (msec) : 2=0.01%, 4=0.59%, 10=49.03%, 20=50.04%, 50=0.33% 00:18:30.198 cpu : usr=81.62%, sys=13.84%, ctx=242, majf=0, minf=2 00:18:30.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:30.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.198 issued rwts: total=16167,8655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.198 00:18:30.198 Run status group 0 (all jobs): 00:18:30.198 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (265MB), run=2008-2008msec 00:18:30.198 WRITE: bw=71.9MiB/s (75.3MB/s), 71.9MiB/s-71.9MiB/s 
(75.3MB/s-75.3MB/s), io=135MiB (142MB), run=1882-1882msec 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:30.198 21:58:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:18:30.456 Nvme0n1 00:18:30.456 21:58:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d4943880-0e55-4857-b929-90a9ada56d8d 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d4943880-0e55-4857-b929-90a9ada56d8d 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=d4943880-0e55-4857-b929-90a9ada56d8d 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:18:31.021 { 00:18:31.021 "uuid": "d4943880-0e55-4857-b929-90a9ada56d8d", 00:18:31.021 "name": "lvs_0", 00:18:31.021 "base_bdev": "Nvme0n1", 00:18:31.021 "total_data_clusters": 4, 00:18:31.021 "free_clusters": 4, 00:18:31.021 "block_size": 4096, 00:18:31.021 "cluster_size": 1073741824 00:18:31.021 } 00:18:31.021 ]' 00:18:31.021 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d4943880-0e55-4857-b929-90a9ada56d8d") .free_clusters' 00:18:31.278 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:18:31.278 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d4943880-0e55-4857-b929-90a9ada56d8d") .cluster_size' 00:18:31.278 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:18:31.278 4096 00:18:31.278 21:58:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:18:31.278 21:58:36 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:18:31.278 21:58:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:31.536 a1e071c3-027a-4366-abe8-29ef9633e411 00:18:31.536 21:58:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:31.793 21:58:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:32.050 21:58:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:32.308 21:58:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:32.566 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:32.566 fio-3.35 00:18:32.566 Starting 1 thread 00:18:35.092 00:18:35.092 test: (groupid=0, jobs=1): err= 0: pid=90160: Wed Jul 24 21:58:40 2024 00:18:35.092 read: IOPS=6479, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2008msec) 00:18:35.092 slat (usec): min=2, max=348, avg= 2.56, stdev= 3.86 00:18:35.092 clat (usec): min=2888, max=17965, avg=10306.52, stdev=836.61 00:18:35.092 lat (usec): min=2898, max=17967, avg=10309.08, stdev=836.27 00:18:35.092 clat percentiles (usec): 00:18:35.092 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:18:35.092 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:18:35.092 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:18:35.092 | 99.00th=[12125], 99.50th=[12518], 99.90th=[16057], 99.95th=[16450], 00:18:35.092 | 99.99th=[17957] 00:18:35.092 bw ( KiB/s): min=25232, max=26480, per=99.91%, avg=25894.00, stdev=522.61, samples=4 00:18:35.092 iops : min= 6308, max= 6620, avg=6473.50, stdev=130.65, samples=4 00:18:35.092 write: IOPS=6490, BW=25.4MiB/s (26.6MB/s)(50.9MiB/2008msec); 0 zone resets 00:18:35.092 slat (usec): min=2, max=249, avg= 2.66, stdev= 2.48 00:18:35.092 clat (usec): min=2414, max=16559, avg=9342.65, stdev=787.78 00:18:35.092 lat (usec): min=2428, max=16562, avg=9345.32, stdev=787.63 00:18:35.092 clat percentiles (usec): 00:18:35.092 | 1.00th=[ 7701], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8717], 00:18:35.092 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:18:35.092 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:18:35.092 | 99.00th=[11076], 99.50th=[11469], 99.90th=[15008], 99.95th=[16057], 00:18:35.092 | 99.99th=[16581] 00:18:35.092 bw ( KiB/s): min=25600, max=26376, per=99.91%, avg=25938.00, stdev=380.39, samples=4 00:18:35.092 iops : min= 6400, max= 6594, avg=6484.50, stdev=95.10, samples=4 00:18:35.092 lat (msec) : 4=0.06%, 10=59.11%, 20=40.83% 00:18:35.092 cpu : usr=71.45%, sys=22.32%, ctx=17, majf=0, minf=6 00:18:35.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:35.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.092 issued rwts: total=13010,13033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.092 00:18:35.092 Run status group 0 (all jobs): 00:18:35.092 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.3MB), run=2008-2008msec 00:18:35.092 WRITE: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=50.9MiB (53.4MB), run=2008-2008msec 00:18:35.092 21:58:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:35.092 21:58:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:35.350 21:58:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=f4270e16-1bd6-42f9-be1c-1bec3947fff1 00:18:35.350 21:58:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 
f4270e16-1bd6-42f9-be1c-1bec3947fff1 00:18:35.350 21:58:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=f4270e16-1bd6-42f9-be1c-1bec3947fff1 00:18:35.350 21:58:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:18:35.350 21:58:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:18:35.350 21:58:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:18:35.350 21:58:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:18:35.609 { 00:18:35.609 "uuid": "d4943880-0e55-4857-b929-90a9ada56d8d", 00:18:35.609 "name": "lvs_0", 00:18:35.609 "base_bdev": "Nvme0n1", 00:18:35.609 "total_data_clusters": 4, 00:18:35.609 "free_clusters": 0, 00:18:35.609 "block_size": 4096, 00:18:35.609 "cluster_size": 1073741824 00:18:35.609 }, 00:18:35.609 { 00:18:35.609 "uuid": "f4270e16-1bd6-42f9-be1c-1bec3947fff1", 00:18:35.609 "name": "lvs_n_0", 00:18:35.609 "base_bdev": "a1e071c3-027a-4366-abe8-29ef9633e411", 00:18:35.609 "total_data_clusters": 1022, 00:18:35.609 "free_clusters": 1022, 00:18:35.609 "block_size": 4096, 00:18:35.609 "cluster_size": 4194304 00:18:35.609 } 00:18:35.609 ]' 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="f4270e16-1bd6-42f9-be1c-1bec3947fff1") .free_clusters' 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="f4270e16-1bd6-42f9-be1c-1bec3947fff1") .cluster_size' 00:18:35.609 4088 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:18:35.609 21:58:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:35.867 e978ea73-ee97-4653-bd22-71580ccab840 00:18:35.867 21:58:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:36.126 21:58:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:36.385 21:58:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:36.644 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:36.645 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:36.645 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:36.645 21:58:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:36.903 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:36.903 fio-3.35 00:18:36.903 Starting 1 thread 00:18:39.435 00:18:39.435 test: (groupid=0, jobs=1): err= 0: pid=90239: Wed Jul 24 21:58:44 2024 00:18:39.435 read: IOPS=5664, BW=22.1MiB/s (23.2MB/s)(44.5MiB/2010msec) 00:18:39.435 slat (usec): min=2, max=326, avg= 2.78, stdev= 4.18 00:18:39.435 clat (usec): min=3241, max=21229, avg=11848.88, stdev=1011.71 00:18:39.435 lat (usec): min=3250, max=21232, avg=11851.66, stdev=1011.31 00:18:39.435 clat percentiles (usec): 00:18:39.435 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:18:39.435 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:18:39.435 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13304], 00:18:39.435 | 99.00th=[14091], 99.50th=[14615], 99.90th=[19006], 99.95th=[20055], 00:18:39.435 | 99.99th=[21103] 00:18:39.435 bw ( KiB/s): min=21920, max=23320, per=99.91%, avg=22638.00, stdev=585.24, samples=4 00:18:39.435 iops : min= 5480, max= 5830, avg=5659.50, stdev=146.31, samples=4 00:18:39.435 write: IOPS=5634, BW=22.0MiB/s (23.1MB/s)(44.2MiB/2010msec); 0 zone resets 00:18:39.435 slat (usec): min=2, max=288, avg= 2.92, stdev= 3.15 00:18:39.435 clat (usec): min=2635, max=19797, avg=10709.44, stdev=942.39 00:18:39.435 lat (usec): 
min=2649, max=19801, avg=10712.36, stdev=942.16 00:18:39.435 clat percentiles (usec): 00:18:39.435 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:18:39.435 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:18:39.435 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:18:39.435 | 99.00th=[12780], 99.50th=[13304], 99.90th=[17433], 99.95th=[18744], 00:18:39.435 | 99.99th=[19792] 00:18:39.435 bw ( KiB/s): min=22144, max=22872, per=99.97%, avg=22530.00, stdev=300.07, samples=4 00:18:39.435 iops : min= 5536, max= 5718, avg=5632.50, stdev=75.02, samples=4 00:18:39.435 lat (msec) : 4=0.06%, 10=10.86%, 20=89.06%, 50=0.03% 00:18:39.435 cpu : usr=74.12%, sys=20.31%, ctx=4, majf=0, minf=6 00:18:39.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:39.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:39.435 issued rwts: total=11386,11325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:39.435 00:18:39.435 Run status group 0 (all jobs): 00:18:39.435 READ: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.6MB), run=2010-2010msec 00:18:39.435 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=44.2MiB (46.4MB), run=2010-2010msec 00:18:39.435 21:58:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:39.435 21:58:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:39.435 21:58:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:39.694 21:58:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:39.952 21:58:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:18:40.211 21:58:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:40.469 21:58:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:41.405 rmmod nvme_tcp 00:18:41.405 rmmod nvme_fabrics 00:18:41.405 rmmod nvme_keyring 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@125 -- # return 0 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 89925 ']' 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 89925 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 89925 ']' 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 89925 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89925 00:18:41.405 killing process with pid 89925 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89925' 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 89925 00:18:41.405 21:58:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 89925 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:41.664 ************************************ 00:18:41.664 END TEST nvmf_fio_host 00:18:41.664 ************************************ 00:18:41.664 00:18:41.664 real 0m19.828s 00:18:41.664 user 1m26.777s 00:18:41.664 sys 0m4.521s 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:41.664 21:58:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.664 21:58:47 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:41.664 21:58:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:41.664 21:58:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:41.664 21:58:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.664 ************************************ 00:18:41.664 START TEST nvmf_failover 00:18:41.664 ************************************ 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:41.664 * Looking for test storage... 
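For reference, the nvmf_fio_host teardown captured above undoes its setup in reverse order: the NVMe-oF subsystems are deleted first, then the nested and outer logical volumes and their stores, and finally the PCIe controller is detached. A condensed sketch of that cleanup sequence, using the same rpc.py invocations that appear in the log (the $rpc shorthand variable is added here only for brevity):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  sync
  $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0
  $rpc bdev_lvol_delete lvs_0/lbd_0
  $rpc bdev_lvol_delete_lvstore -l lvs_0
  $rpc bdev_nvme_detach_controller Nvme0

Deleting the nested store lvs_n_0 before lvs_0/lbd_0 matters, since that store was created on top of the lbd_0 volume earlier in the run.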
00:18:41.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.664 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.923 
21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:41.923 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:41.924 Cannot find device "nvmf_tgt_br" 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.924 Cannot find device "nvmf_tgt_br2" 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:41.924 Cannot find device "nvmf_tgt_br" 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:41.924 Cannot find device "nvmf_tgt_br2" 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.924 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:42.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:18:42.183 00:18:42.183 --- 10.0.0.2 ping statistics --- 00:18:42.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.183 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:42.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:42.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:42.183 00:18:42.183 --- 10.0.0.3 ping statistics --- 00:18:42.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.183 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:42.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:42.183 00:18:42.183 --- 10.0.0.1 ping statistics --- 00:18:42.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.183 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=90469 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 90469 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 90469 ']' 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
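The nvmf_veth_init sequence traced above builds the self-contained topology that the TCP suites use: a network namespace nvmf_tgt_ns_spdk holding the target interfaces (10.0.0.2 and 10.0.0.3), an initiator-side interface nvmf_init_if (10.0.0.1), and an nvmf_br bridge tying the veth peers together, with an iptables rule admitting port 4420 and ping checks confirming reachability. A condensed sketch of the same wiring, limited to the commands shown in the log (a few steps are joined with && purely for compactness):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Because the target application is then launched inside the namespace (the NVMF_APP array is prefixed with ip netns exec nvmf_tgt_ns_spdk), listeners can later be added and removed on 10.0.0.2 without touching real hardware.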
00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:42.183 21:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:42.183 [2024-07-24 21:58:47.798459] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:42.183 [2024-07-24 21:58:47.798834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.442 [2024-07-24 21:58:47.935027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.442 [2024-07-24 21:58:48.017452] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.442 [2024-07-24 21:58:48.017784] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.442 [2024-07-24 21:58:48.017892] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.442 [2024-07-24 21:58:48.017967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.442 [2024-07-24 21:58:48.018102] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.442 [2024-07-24 21:58:48.018256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.442 [2024-07-24 21:58:48.018602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.442 [2024-07-24 21:58:48.018638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.442 [2024-07-24 21:58:48.074134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:43.377 21:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:43.377 21:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:43.377 21:58:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.377 21:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.377 21:58:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:43.377 21:58:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.377 21:58:48 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:43.377 [2024-07-24 21:58:48.998287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.377 21:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:43.635 Malloc0 00:18:43.635 21:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:43.892 21:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.149 21:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.406 [2024-07-24 21:58:49.940282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:18:44.406 21:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:44.665 [2024-07-24 21:58:50.160325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:44.665 21:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:44.665 [2024-07-24 21:58:50.380556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:44.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90527 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90527 /var/tmp/bdevperf.sock 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 90527 ']' 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
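Before any failover case runs, the target is given a single malloc-backed subsystem that listens on three TCP ports of the same address, and bdevperf is started in its RPC-driven wait mode (-z) so paths can be attached and re-attached on demand. A condensed sketch of that bring-up, reusing the names and addresses from the log (the $rpc shorthand and the for loop are added here for brevity, and bdevperf is backgrounded to mirror the test flow):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

The separate RPC socket (/var/tmp/bdevperf.sock) keeps bdevperf's controller-side commands apart from the target's own socket, which is why the attach calls later in the log carry the -s /var/tmp/bdevperf.sock option.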
00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:44.924 21:58:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:45.858 21:58:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.858 21:58:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:45.858 21:58:51 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:46.117 NVMe0n1 00:18:46.117 21:58:51 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:46.376 00:18:46.376 21:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.376 21:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90556 00:18:46.376 21:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:47.753 21:58:53 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.753 [2024-07-24 21:58:53.252449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.753 [2024-07-24 21:58:53.252835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same 
with the state(5) to be set 00:18:47.753 (the tcp.c:1598:nvmf_tcp_qpair_set_recv_state *ERROR* line above repeats with successive timestamps while the 4420 listener is being torn down; the duplicate lines between 21:58:53.252843 and 21:58:53.253590 are omitted here) 00:18:47.754 [2024-07-24
21:58:53.253590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.754 [2024-07-24 21:58:53.253598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.754 [2024-07-24 21:58:53.253607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.754 [2024-07-24 21:58:53.253961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.754 [2024-07-24 21:58:53.254068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.754 [2024-07-24 21:58:53.254140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.754 [2024-07-24 21:58:53.254209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.755 [2024-07-24 21:58:53.254260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.755 [2024-07-24 21:58:53.254400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.755 [2024-07-24 21:58:53.254524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6180 is same with the state(5) to be set 00:18:47.755 21:58:53 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:51.039 21:58:56 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:51.039 00:18:51.039 21:58:56 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:51.297 21:58:56 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:54.583 21:58:59 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.583 [2024-07-24 21:59:00.142786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.583 21:59:00 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:55.541 21:59:01 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:55.799 21:59:01 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 90556 00:19:02.364 0 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 90527 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 90527 ']' 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 90527 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90527 
00:19:02.364 killing process with pid 90527 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90527' 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 90527 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 90527 00:19:02.364 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:02.364 [2024-07-24 21:58:50.441839] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:19:02.364 [2024-07-24 21:58:50.441948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90527 ] 00:19:02.364 [2024-07-24 21:58:50.600102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.364 [2024-07-24 21:58:50.700056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.364 [2024-07-24 21:58:50.759821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:02.364 Running I/O for 15 seconds... 00:19:02.364 [2024-07-24 21:58:53.254338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.365 [2024-07-24 21:58:53.254384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.365 [2024-07-24 21:58:53.254415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.365 [2024-07-24 21:58:53.254443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.365 [2024-07-24 21:58:53.254471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdba650 is same with the state(5) to be set 00:19:02.365 [2024-07-24 21:58:53.254712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.254978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.254993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 
[2024-07-24 21:58:53.255095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.365 [2024-07-24 21:58:53.255741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.365 [2024-07-24 21:58:53.255756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.255969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.255984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 
[2024-07-24 21:58:53.256329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.366 [2024-07-24 21:58:53.256946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.366 [2024-07-24 21:58:53.256960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.256975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.256995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:02.367 [2024-07-24 21:58:53.257553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.257966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.257980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.258001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.258015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.258030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.258044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.258058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.258072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.258087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.367 [2024-07-24 21:58:53.258100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.258115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.367 [2024-07-24 21:58:53.258129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.367 [2024-07-24 21:58:53.258144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.368 [2024-07-24 21:58:53.258534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.368 [2024-07-24 21:58:53.258562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddfc90 is same with the state(5) to be set 00:19:02.368 [2024-07-24 21:58:53.258592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.368 [2024-07-24 21:58:53.258603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.368 [2024-07-24 21:58:53.258624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:19:02.368 [2024-07-24 21:58:53.258638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.368 [2024-07-24 21:58:53.258695] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xddfc90 was disconnected and freed. reset controller. 00:19:02.368 [2024-07-24 21:58:53.258712] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:02.368 [2024-07-24 21:58:53.258731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.368 [2024-07-24 21:58:53.262555] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.368 [2024-07-24 21:58:53.262591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdba650 (9): Bad file descriptor 00:19:02.368 [2024-07-24 21:58:53.300528] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:02.368 [2024-07-24 21:58:56.880225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:02.368 [2024-07-24 21:58:56.880315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for the rest of the I/O queued on qid:1 (WRITE lba 81296-81792, READ lba 80840-81272), every command completed as ABORTED - SQ DELETION (00/08) ...]
00:19:02.371 [2024-07-24 21:58:56.883823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1db0 is same with the state(5) to be set
00:19:02.371 [2024-07-24 21:58:56.883839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:02.371 [2024-07-24 21:58:56.883849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:02.371 [2024-07-24 21:58:56.883859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81280 len:8 PRP1 0x0 PRP2 0x0
00:19:02.371 [2024-07-24 21:58:56.883871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting queued i/o / Command completed manually sequence repeats for WRITE lba 81800-81856, each completed as ABORTED - SQ DELETION (00/08) ...]
00:19:02.372 [2024-07-24 21:58:56.884300] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde1db0 was disconnected and freed. reset controller.
00:19:02.372 [2024-07-24 21:58:56.884315] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:19:02.372 [2024-07-24 21:58:56.884368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:02.372 [2024-07-24 21:58:56.884388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin qid:0 cid:2, cid:1 and cid:0 ...]
00:19:02.372 [2024-07-24 21:58:56.884475] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:02.372 [2024-07-24 21:58:56.888224] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:02.372 [2024-07-24 21:58:56.888322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdba650 (9): Bad file descriptor
00:19:02.372 [2024-07-24 21:58:56.922788] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
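Note on the sequence above: when the TCP qpair to 10.0.0.2:4421 is torn down, bdev_nvme completes the queued I/O as ABORTED - SQ DELETION and resets the controller against the next registered path, 10.0.0.2:4422. A minimal sketch of how such an alternate path could be registered through the standard scripts/rpc.py interface is shown below; the bdev name Nvme0 is hypothetical and the exact options (in particular -x failover) are an assumption about the rpc.py build in use, not something taken from this log.
  # attach the primary path for nqn.2016-06.io.spdk:cnode1 (address and port as seen in the log)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  # register 10.0.0.2:4422 as a failover path for the same subsystem on the same controller
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover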
00:19:02.372 [2024-07-24 21:59:01.407106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:02.372 [2024-07-24 21:59:01.407178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... after the failover, the same command / ABORTED - SQ DELETION (00/08) completion pair repeats for the I/O queued on qid:1 (WRITE lba 35648-35936, READ lba 35128-35376) ...]
00:19:02.374 [2024-07-24 21:59:01.409358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:02.374 [2024-07-24 21:59:01.409371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.409652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.409964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.409978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 
21:59:01.409993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.410007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.410036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.410065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.410094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.374 [2024-07-24 21:59:01.410122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.410150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.410179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.410208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.410238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.410283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.374 [2024-07-24 21:59:01.410313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.374 [2024-07-24 21:59:01.410328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.375 [2024-07-24 21:59:01.410342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.375 [2024-07-24 21:59:01.410371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.375 [2024-07-24 21:59:01.410823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1c10 is same with the state(5) to be set 00:19:02.375 [2024-07-24 21:59:01.410854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.410864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.410875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35632 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.410889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.410912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.410929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36088 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.410944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.410957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.410967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.410977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36096 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.410990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.411013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.411029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36104 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.411043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.411066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.411076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36112 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.411089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.411112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.411121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36120 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.411135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.411157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.411167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36128 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.411180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 
21:59:01.411203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.411213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36136 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.411226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.375 [2024-07-24 21:59:01.411249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.375 [2024-07-24 21:59:01.411259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36144 len:8 PRP1 0x0 PRP2 0x0 00:19:02.375 [2024-07-24 21:59:01.411272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411328] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde1c10 was disconnected and freed. reset controller. 00:19:02.375 [2024-07-24 21:59:01.411345] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:19:02.375 [2024-07-24 21:59:01.411416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.375 [2024-07-24 21:59:01.411437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.375 [2024-07-24 21:59:01.411465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.375 [2024-07-24 21:59:01.411503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.375 [2024-07-24 21:59:01.411531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.375 [2024-07-24 21:59:01.411544] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.375 [2024-07-24 21:59:01.415356] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.375 [2024-07-24 21:59:01.415396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdba650 (9): Bad file descriptor 00:19:02.375 [2024-07-24 21:59:01.450540] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
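Each failover round above ends in a "Resetting controller successful" notice, and the script is about to count those notices before moving on. A minimal standalone version of that check, assuming the bdevperf output was captured to the try.txt file referenced later in the trace and that three successful resets are expected (matching the test's own comparison), might look like:

  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  # the failover test removes three paths in turn, so three successful resets are expected
  (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }
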
00:19:02.375 00:19:02.375 Latency(us) 00:19:02.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.376 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:02.376 Verification LBA range: start 0x0 length 0x4000 00:19:02.376 NVMe0n1 : 15.01 9164.94 35.80 215.67 0.00 13612.90 618.12 18350.08 00:19:02.376 =================================================================================================================== 00:19:02.376 Total : 9164.94 35.80 215.67 0.00 13612.90 618.12 18350.08 00:19:02.376 Received shutdown signal, test time was about 15.000000 seconds 00:19:02.376 00:19:02.376 Latency(us) 00:19:02.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.376 =================================================================================================================== 00:19:02.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:02.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90727 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90727 /var/tmp/bdevperf.sock 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 90727 ']' 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
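The tail of the trace above restarts bdevperf in idle mode (-z) on a dedicated RPC socket and waits for that socket before configuring it. A condensed sketch of the same pattern, with the flags copied from the trace and a simple polling loop standing in for the test's waitforlisten helper, could be:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # block until the UNIX-domain RPC socket exists; rpc.py calls can then target it with -s
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
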
00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:02.376 21:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:02.941 21:59:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:02.941 21:59:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:19:02.941 21:59:08 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:02.941 [2024-07-24 21:59:08.628258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:02.941 21:59:08 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:03.199 [2024-07-24 21:59:08.864487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:03.199 21:59:08 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:03.766 NVMe0n1 00:19:03.766 21:59:09 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:03.766 00:19:04.023 21:59:09 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:04.281 00:19:04.281 21:59:09 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:04.281 21:59:09 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:04.539 21:59:10 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:04.539 21:59:10 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:07.851 21:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:07.851 21:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:07.851 21:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.851 21:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90805 00:19:07.851 21:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 90805 00:19:09.227 0 00:19:09.227 21:59:14 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:09.227 [2024-07-24 21:59:07.435981] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
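The rpc.py sequence traced above is what builds the failover scenario: the target advertises two extra listeners, bdevperf attaches the same subsystem through all three ports under one controller name, and the currently active path is then detached so bdev_nvme has to fail over while perform_tests drives I/O. Condensed, with only the long repository paths shortened, the sequence is roughly:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      # all three paths attach under the same bdev controller name NVMe0
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  # drop the active 4420 path; bdev_nvme should fail over to 4421/4422
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  wait $!    # exit status 0 indicates the verify workload completed
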
00:19:09.227 [2024-07-24 21:59:07.436125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90727 ] 00:19:09.227 [2024-07-24 21:59:07.570092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.227 [2024-07-24 21:59:07.649182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.227 [2024-07-24 21:59:07.704183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:09.227 [2024-07-24 21:59:10.225959] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:09.228 [2024-07-24 21:59:10.226468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.228 [2024-07-24 21:59:10.226580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.228 [2024-07-24 21:59:10.226708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.228 [2024-07-24 21:59:10.226793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.228 [2024-07-24 21:59:10.226908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.228 [2024-07-24 21:59:10.226988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.228 [2024-07-24 21:59:10.227071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:09.228 [2024-07-24 21:59:10.227146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.228 [2024-07-24 21:59:10.227232] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:09.228 [2024-07-24 21:59:10.227353] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.228 [2024-07-24 21:59:10.227454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2096650 (9): Bad file descriptor 00:19:09.228 [2024-07-24 21:59:10.233969] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:09.228 Running I/O for 1 seconds... 
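One sanity check on the result tables on either side of this run: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size, so 9164.94 IOPS x 4096 B / 2^20 ≈ 35.80 MiB/s for the 15-second run above, and 7074.10 IOPS x 4096 B / 2^20 ≈ 27.63 MiB/s for the one-second run reported below.
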
00:19:09.228 00:19:09.228 Latency(us) 00:19:09.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.228 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:09.228 Verification LBA range: start 0x0 length 0x4000 00:19:09.228 NVMe0n1 : 1.00 7074.10 27.63 0.00 0.00 17996.11 1869.27 14715.81 00:19:09.228 =================================================================================================================== 00:19:09.228 Total : 7074.10 27.63 0.00 0.00 17996.11 1869.27 14715.81 00:19:09.228 21:59:14 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:09.228 21:59:14 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:09.228 21:59:14 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:09.486 21:59:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:09.486 21:59:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:09.744 21:59:15 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:10.003 21:59:15 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 90727 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 90727 ']' 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 90727 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:13.362 21:59:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90727 00:19:13.362 killing process with pid 90727 00:19:13.362 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:13.362 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:13.362 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90727' 00:19:13.362 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 90727 00:19:13.362 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 90727 00:19:13.620 21:59:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:13.620 21:59:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.878 21:59:19 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:13.879 21:59:19 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:13.879 rmmod nvme_tcp 00:19:13.879 rmmod nvme_fabrics 00:19:13.879 rmmod nvme_keyring 00:19:13.879 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 90469 ']' 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 90469 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 90469 ']' 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 90469 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90469 00:19:14.138 killing process with pid 90469 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90469' 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 90469 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 90469 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.138 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.397 21:59:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:14.397 ************************************ 00:19:14.397 END TEST nvmf_failover 00:19:14.397 ************************************ 00:19:14.397 00:19:14.397 real 0m32.607s 00:19:14.397 user 2m5.967s 00:19:14.397 sys 0m5.886s 00:19:14.397 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:14.397 21:59:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:14.397 21:59:19 nvmf_tcp -- 
nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:14.397 21:59:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:14.397 21:59:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:14.397 21:59:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:14.397 ************************************ 00:19:14.397 START TEST nvmf_host_discovery 00:19:14.397 ************************************ 00:19:14.397 21:59:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:14.397 * Looking for test storage... 00:19:14.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.397 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:14.398 Cannot find device "nvmf_tgt_br" 00:19:14.398 
21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:14.398 Cannot find device "nvmf_tgt_br2" 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:14.398 Cannot find device "nvmf_tgt_br" 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:19:14.398 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:14.656 Cannot find device "nvmf_tgt_br2" 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:14.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:14.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:14.656 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:14.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:14.914 00:19:14.914 --- 10.0.0.2 ping statistics --- 00:19:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.914 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:14.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:14.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:19:14.914 00:19:14.914 --- 10.0.0.3 ping statistics --- 00:19:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.914 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:14.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:14.914 00:19:14.914 --- 10.0.0.1 ping statistics --- 00:19:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.914 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=91064 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 91064 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 91064 ']' 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.914 21:59:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.914 [2024-07-24 21:59:20.476222] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:19:14.914 [2024-07-24 21:59:20.476316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.914 [2024-07-24 21:59:20.615755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.174 [2024-07-24 21:59:20.697408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.174 [2024-07-24 21:59:20.697465] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
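For orientation, the nvmf_veth_init sequence traced above amounts to a small veth/bridge topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the default namespace, the target gets nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, and the bridge halves of the three veth pairs are joined on nvmf_br. A condensed sketch using only the ip/iptables commands that appear in the trace (the nvmf_tgt path is the one the log shows; adjust for your checkout):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the target application then runs inside the namespace, as traced above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &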
00:19:15.174 [2024-07-24 21:59:20.697479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.174 [2024-07-24 21:59:20.697490] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.174 [2024-07-24 21:59:20.697499] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.174 [2024-07-24 21:59:20.697527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.174 [2024-07-24 21:59:20.756336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 [2024-07-24 21:59:21.517685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 [2024-07-24 21:59:21.525834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 null0 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 null1 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
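With the target up inside the namespace, the test configures it over the default /var/tmp/spdk.sock RPC socket. rpc_cmd in these traces is the suite's wrapper around SPDK's scripts/rpc.py, so a standalone approximation of the same configuration (discovery NQN, address, port 8009 and null-bdev sizes copied from the trace) would look roughly like:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512   # 1000 blocks of 512 bytes
scripts/rpc.py bdev_null_create null1 1000 512
scripts/rpc.py bdev_wait_for_examine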
00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91102 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91102 /tmp/host.sock 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 91102 ']' 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:16.112 21:59:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 [2024-07-24 21:59:21.610438] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:19:16.112 [2024-07-24 21:59:21.610763] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91102 ] 00:19:16.112 [2024-07-24 21:59:21.751804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.370 [2024-07-24 21:59:21.848941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.370 [2024-07-24 21:59:21.908429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:16.999 21:59:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:16.999 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.257 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # sort 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:17.258 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.515 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:17.515 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:17.515 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.515 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.515 [2024-07-24 21:59:22.994339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.515 21:59:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.515 21:59:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
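The empty-string comparisons at host/discovery.sh@83-@92 above query the second nvmf_tgt instance, the one started with -r /tmp/host.sock and acting as the discovery host. The two helpers expanded in the trace can be reconstructed roughly as follows; the jq filter, sort and xargs come straight from the @55/@59 lines:
# rpc_cmd is the suite's rpc.py wrapper; -s selects the host-side RPC socket
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
Both helpers return a single space-separated line, which is what the string comparisons in the trace rely on.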
-- # xtrace_disable 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:17.515 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.516 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.774 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:19:17.774 21:59:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:19:18.032 [2024-07-24 21:59:23.630723] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:18.032 [2024-07-24 21:59:23.630749] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:18.032 [2024-07-24 21:59:23.630769] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:18.032 [2024-07-24 21:59:23.636787] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:18.032 [2024-07-24 21:59:23.693878] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:18.032 [2024-07-24 21:59:23.694077] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:18.598 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:18.599 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.857 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:18.858 
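The is_notification_count_eq checks running through this stretch poll the host-side notification bus until the expected number of events has arrived. Pieced together from the @74/@75 lines and the @910-@916 retry loop, the machinery is approximately the following sketch, where notify_id is a global cursor that advances past consumed notifications (the exact bookkeeping is inferred from the traced values 0, 1, 2):
waitforcondition() {
    local cond=$1 max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1   # the suite treats this as a test failure
}
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}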
21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.858 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.116 [2024-07-24 21:59:24.595783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:19.116 [2024-07-24 21:59:24.596268] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:19.116 [2024-07-24 21:59:24.596296] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:19.116 [2024-07-24 21:59:24.602297] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:19.116 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
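After the @118 add_listener on port 4421, the discovery service fetches the log page again and the test waits for nvme0 to expose both paths. The port list used in the 4420/4421 comparison comes from the @63 lines; a helper sketch consistent with that trace:
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# e.g. waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'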
00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:19.117 [2024-07-24 21:59:24.663579] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:19.117 [2024-07-24 21:59:24.663603] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:19.117 [2024-07-24 21:59:24.663610] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 [2024-07-24 21:59:24.816658] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:19.117 [2024-07-24 21:59:24.816695] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.117 [2024-07-24 21:59:24.821004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.117 [2024-07-24 21:59:24.821039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.117 [2024-07-24 21:59:24.821053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.117 [2024-07-24 21:59:24.821063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.117 [2024-07-24 21:59:24.821073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.117 [2024-07-24 21:59:24.821082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.117 [2024-07-24 21:59:24.821092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.117 [2024-07-24 21:59:24.821101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.117 [2024-07-24 21:59:24.821111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3af0 is same with the state(5) to be set 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:19.117 [2024-07-24 21:59:24.822656] bdev_nvme.c:6771:discovery_remove_controllers: 
*INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:19.117 [2024-07-24 21:59:24.822683] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:19.117 [2024-07-24 21:59:24.822781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed3af0 (9): Bad file descriptor 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:19.117 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:19.375 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.375 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.375 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.375 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.375 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 
max=10 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.376 21:59:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:19.376 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:19.634 
21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.634 21:59:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.567 [2024-07-24 21:59:26.241592] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:20.567 [2024-07-24 21:59:26.241640] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:20.567 [2024-07-24 21:59:26.241675] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:20.567 [2024-07-24 21:59:26.247622] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:20.825 [2024-07-24 21:59:26.307234] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:20.825 [2024-07-24 21:59:26.307280] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.825 request: 00:19:20.825 { 00:19:20.825 "name": "nvme", 00:19:20.825 "trtype": "tcp", 00:19:20.825 "traddr": "10.0.0.2", 00:19:20.825 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:20.825 "adrfam": "ipv4", 00:19:20.825 "trsvcid": "8009", 00:19:20.825 "wait_for_attach": true, 00:19:20.825 "method": "bdev_nvme_start_discovery", 00:19:20.825 "req_id": 1 00:19:20.825 } 00:19:20.825 Got JSON-RPC error response 00:19:20.825 response: 00:19:20.825 { 00:19:20.825 "code": -17, 00:19:20.825 "message": "File exists" 00:19:20.825 } 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.825 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.826 request: 00:19:20.826 { 00:19:20.826 "name": "nvme_second", 00:19:20.826 "trtype": "tcp", 00:19:20.826 "traddr": "10.0.0.2", 00:19:20.826 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:20.826 "adrfam": "ipv4", 00:19:20.826 "trsvcid": "8009", 00:19:20.826 "wait_for_attach": true, 00:19:20.826 "method": "bdev_nvme_start_discovery", 00:19:20.826 "req_id": 1 00:19:20.826 } 00:19:20.826 Got JSON-RPC error response 00:19:20.826 response: 00:19:20.826 { 00:19:20.826 "code": -17, 00:19:20.826 "message": "File exists" 00:19:20.826 } 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:20.826 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.084 21:59:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.084 21:59:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:22.017 [2024-07-24 21:59:27.565049] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.017 [2024-07-24 21:59:27.565125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0c040 with addr=10.0.0.2, port=8010 00:19:22.017 [2024-07-24 21:59:27.565152] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:22.017 [2024-07-24 21:59:27.565164] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:22.017 [2024-07-24 21:59:27.565173] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:22.951 [2024-07-24 21:59:28.565062] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.951 [2024-07-24 21:59:28.565135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0c040 with addr=10.0.0.2, port=8010 00:19:22.951 [2024-07-24 21:59:28.565175] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:22.951 [2024-07-24 21:59:28.565201] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:22.951 [2024-07-24 21:59:28.565210] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:23.885 [2024-07-24 21:59:29.564902] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:23.885 request: 00:19:23.885 { 00:19:23.885 "name": "nvme_second", 00:19:23.885 "trtype": "tcp", 00:19:23.885 "traddr": "10.0.0.2", 00:19:23.885 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:23.885 "adrfam": "ipv4", 00:19:23.885 "trsvcid": "8010", 00:19:23.885 "attach_timeout_ms": 3000, 00:19:23.885 "method": "bdev_nvme_start_discovery", 00:19:23.885 "req_id": 1 00:19:23.885 } 00:19:23.885 Got JSON-RPC error response 00:19:23.885 response: 00:19:23.885 { 00:19:23.885 "code": -110, 00:19:23.885 "message": "Connection timed out" 
00:19:23.885 } 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:23.885 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91102 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.144 rmmod nvme_tcp 00:19:24.144 rmmod nvme_fabrics 00:19:24.144 rmmod nvme_keyring 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 91064 ']' 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 91064 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 91064 ']' 00:19:24.144 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 91064 00:19:24.145 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:19:24.145 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:24.145 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91064 00:19:24.145 killing process with pid 91064 00:19:24.145 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:24.145 21:59:29 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:24.145 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91064' 00:19:24.145 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 91064 00:19:24.145 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 91064 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.404 21:59:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.404 21:59:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:24.404 00:19:24.404 real 0m10.090s 00:19:24.404 user 0m19.362s 00:19:24.404 sys 0m1.989s 00:19:24.404 ************************************ 00:19:24.404 END TEST nvmf_host_discovery 00:19:24.404 ************************************ 00:19:24.404 21:59:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:24.404 21:59:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.404 21:59:30 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:24.404 21:59:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:24.404 21:59:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:24.404 21:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:24.404 ************************************ 00:19:24.404 START TEST nvmf_host_multipath_status 00:19:24.404 ************************************ 00:19:24.404 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:24.663 * Looking for test storage... 
00:19:24.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.663 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:24.664 Cannot find device "nvmf_tgt_br" 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:19:24.664 Cannot find device "nvmf_tgt_br2" 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:24.664 Cannot find device "nvmf_tgt_br" 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:24.664 Cannot find device "nvmf_tgt_br2" 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:24.664 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:24.922 21:59:30 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:24.922 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:24.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:19:24.923 00:19:24.923 --- 10.0.0.2 ping statistics --- 00:19:24.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.923 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:24.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:24.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:24.923 00:19:24.923 --- 10.0.0.3 ping statistics --- 00:19:24.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.923 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:24.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:24.923 00:19:24.923 --- 10.0.0.1 ping statistics --- 00:19:24.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.923 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=91547 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 91547 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 91547 ']' 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:24.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:24.923 21:59:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:24.923 [2024-07-24 21:59:30.559440] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:19:24.923 [2024-07-24 21:59:30.559514] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.181 [2024-07-24 21:59:30.694776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:25.181 [2024-07-24 21:59:30.786774] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.181 [2024-07-24 21:59:30.787065] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.181 [2024-07-24 21:59:30.787292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.181 [2024-07-24 21:59:30.787468] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.181 [2024-07-24 21:59:30.787524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.181 [2024-07-24 21:59:30.787781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.181 [2024-07-24 21:59:30.787799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.181 [2024-07-24 21:59:30.847648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91547 00:19:26.115 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:26.115 [2024-07-24 21:59:31.814897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.389 21:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:26.389 Malloc0 00:19:26.389 21:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:26.658 21:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.915 21:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.172 [2024-07-24 21:59:32.800931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.172 21:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:27.430 [2024-07-24 21:59:33.017084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:27.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91603 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91603 /var/tmp/bdevperf.sock 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 91603 ']' 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:27.430 21:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:28.804 21:59:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:28.804 21:59:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:28.804 21:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:28.804 21:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:29.061 Nvme0n1 00:19:29.061 21:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:29.319 Nvme0n1 00:19:29.319 21:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:29.319 21:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:31.849 21:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:31.849 21:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:31.849 21:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:31.849 21:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.224 21:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:33.482 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:33.482 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:33.482 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.482 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:33.740 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.740 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:33.740 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.740 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:33.997 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.997 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:33.997 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:33.997 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.255 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.255 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:19:34.255 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.255 21:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:34.512 21:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.512 21:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:34.512 21:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:34.770 21:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:35.028 21:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:35.971 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:35.971 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:35.971 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.971 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:36.231 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:36.231 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:36.231 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:36.231 21:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.488 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.488 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:36.488 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:36.488 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.746 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.746 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:36.746 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.746 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:37.005 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.005 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:37.005 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:37.005 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.264 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.264 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:37.264 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.264 21:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:37.522 21:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.522 21:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:37.522 21:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:37.780 21:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:38.038 21:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:38.972 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:38.972 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:38.972 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.972 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:39.230 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.230 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:39.230 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.231 21:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:39.490 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.490 21:59:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:39.490 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.490 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.056 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:40.314 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.314 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:40.314 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.314 21:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:40.571 21:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.571 21:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:40.572 21:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:40.830 21:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:41.093 21:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:42.042 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:42.042 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:42.042 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.042 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:42.300 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:42.300 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:42.300 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:42.300 21:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.558 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:42.558 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:42.558 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:42.558 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.816 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:42.816 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:42.816 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.816 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:43.075 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.075 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:43.075 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:43.075 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.333 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.333 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:43.333 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.333 21:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:43.592 21:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:43.592 21:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:43.592 21:59:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:43.851 21:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:44.108 21:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:45.041 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:45.041 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:45.041 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.041 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:45.299 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:45.299 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:45.299 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.299 21:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:45.557 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:45.557 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:45.557 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:45.557 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.815 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:45.815 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:45.815 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.815 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:46.074 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.074 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:46.074 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.074 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:46.332 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:46.332 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:46.332 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:46.332 21:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.590 21:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:46.590 21:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:46.590 21:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:46.849 21:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:47.107 21:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:48.042 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:48.042 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:48.042 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.042 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:48.300 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:48.300 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:48.300 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.300 21:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:48.558 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.558 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:48.558 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.558 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:48.816 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.816 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:48.816 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.816 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:49.074 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.074 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:49.074 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:49.074 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.333 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:49.333 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:49.333 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.333 21:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:49.592 21:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.592 21:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:49.850 21:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:49.850 21:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:50.108 21:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:50.367 21:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current true 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.753 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:52.033 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.034 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:52.034 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.034 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.034 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.034 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.034 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.291 21:59:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:52.550 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.550 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:52.550 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.550 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:52.808 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.808 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:52.808 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.808 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.066 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.066 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:53.066 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:53.324 21:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
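The ANA transitions driven above all go through the same two target-side RPC calls: set_ANA_state reprograms the 4420 and 4421 listeners of nqn.2016-06.io.spdk:cnode1, and from multipath_status.sh@116 onward the host's Nvme0n1 controller runs with the active_active multipath policy set via bdev_nvme_set_multipath_policy. The sketch below is assembled only from the RPC calls visible in this run; variable names are introduced here, and the per-step "sleep 1" before each re-check is folded into comments.

  # Reconstructed sketch of the ANA-state cycling performed above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {
      # $1 -> ANA state for the 4420 listener, $2 -> ANA state for the 4421 listener
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Host side: under active_active more than one path can report current==true
  # at once, as the check at multipath_status.sh@121 above shows.
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

  # Each transition is followed by "sleep 1" and a check_status in the trace, e.g.:
  set_ANA_state optimized optimized       # both paths current:      check_status true  true ...
  set_ANA_state non_optimized optimized   # only 4421 stays current: check_status false true ...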
00:19:53.324 21:59:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.699 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:54.957 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.957 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:54.957 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.957 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.215 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.215 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:55.215 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.215 22:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:55.473 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.473 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:55.473 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.473 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:55.731 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.731 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:55.731 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.731 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:55.989 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.989 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:55.989 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:56.247 22:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:56.504 22:00:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:57.435 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:57.435 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:57.435 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.435 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:57.693 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.693 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:57.693 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.693 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:57.951 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.951 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:57.951 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.951 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.208 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.208 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.208 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.208 22:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:58.466 22:00:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.466 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:58.466 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.466 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:58.725 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.725 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:58.725 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.725 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:58.983 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.983 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:58.983 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:59.241 22:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:59.498 22:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:00.432 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:00.432 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:00.432 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.432 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:00.690 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.690 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:00.690 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.690 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:00.947 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.947 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:00.947 22:00:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.947 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.205 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.205 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:01.205 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:01.205 22:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.461 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.461 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:01.461 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.461 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:01.718 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.718 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:01.718 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.718 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91603 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 91603 ']' 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 91603 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91603 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:01.976 killing process with pid 91603 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91603' 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 91603 00:20:01.976 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@970 -- # wait 91603 00:20:01.976 Connection closed with partial response: 00:20:01.976 00:20:01.976 00:20:02.237 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91603 00:20:02.237 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:02.237 [2024-07-24 21:59:33.090568] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:20:02.237 [2024-07-24 21:59:33.090699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91603 ] 00:20:02.237 [2024-07-24 21:59:33.229897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.237 [2024-07-24 21:59:33.319505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.237 [2024-07-24 21:59:33.377306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:02.237 Running I/O for 90 seconds... 00:20:02.237 [2024-07-24 21:59:49.427344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 21:59:49.427418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 21:59:49.427512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 21:59:49.427551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 21:59:49.427589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 21:59:49.427653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 21:59:49.427694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 
21:59:49.427742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.237 [2024-07-24 21:59:49.427778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.237 [2024-07-24 21:59:49.427814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.237 [2024-07-24 21:59:49.427850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:02.237 [2024-07-24 21:59:49.427871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.237 [2024-07-24 21:59:49.428057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86864 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.238 [2024-07-24 21:59:49.428922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.238 [2024-07-24 21:59:49.428966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.428988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.238 [2024-07-24 21:59:49.429003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.429024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.238 [2024-07-24 21:59:49.429040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 21:59:49.429071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.238 [2024-07-24 21:59:49.429097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:02.238 [2024-07-24 
21:59:49.429 - 21:59:49.433] nvme_qpair.c: *NOTICE*: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs on qid:1: queued WRITE (lba 87408-87816) and READ (lba 86992-87304) commands, nsid:1 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path reported the namespace as inaccessible
00:20:02.241 [2024-07-24 22:00:05.054 - 22:00:05.059] nvme_qpair.c: *NOTICE*: a second burst of WRITE (lba 93768-94400) and READ (lba 93400-93808) command/completion pairs on qid:1, all completing with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status
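When triaging a window like the one summarized above, a quick way to gauge its size is to count the inaccessible-path completions in a saved copy of this console output. A minimal sketch, assuming the output has been saved to a file named build.log (a hypothetical name, not part of this run):

    # Occurrences of completions that failed with the ANA "inaccessible" status (03/02).
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l

    # How many of the printed qid:1 commands were writes vs. reads.
    grep -o 'WRITE sqid:1' build.log | wc -l
    grep -o 'READ sqid:1' build.log | wc -l

Both direction counters coming back non-zero shows that reads and writes alike were being completed with the ANA error while the path was in the inaccessible state.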
00:20:02.242 Received shutdown signal, test time was about 32.363485 seconds
00:20:02.242
00:20:02.242                                          Latency(us)
00:20:02.242 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:02.242 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:02.242      Verification LBA range: start 0x0 length 0x4000
00:20:02.242      Nvme0n1             :      32.36    8690.32      33.95       0.00       0.00   14697.83     195.49 4026531.84
00:20:02.242 ===================================================================================================================
00:20:02.242 Total                       :               8690.32      33.95       0.00       0.00   14697.83     195.49 4026531.84
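As a sanity check on the summary row above: with a 4096-byte I/O size, throughput in MiB/s follows directly from IOPS, so the IOPS and MiB/s columns should agree. A quick one-liner confirming that, using the numbers from the Nvme0n1 row:

    # 8690.32 IOPS at 4096 bytes per I/O is 8690.32 * 4096 / 1048576 MiB/s,
    # which rounds to the 33.95 MiB/s reported in the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 8690.32 * 4096 / 1048576 }'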
00:20:02.242 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:02.500 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:20:02.500 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:02.500 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:20:02.500 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:02.500 22:00:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:02.500 rmmod nvme_tcp
00:20:02.500 rmmod nvme_fabrics
00:20:02.500 rmmod nvme_keyring
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 91547 ']'
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 91547
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 91547 ']'
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 91547
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91547
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:20:02.500 killing process with pid 91547
22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91547'
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 91547
00:20:02.500 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 91547
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:20:02.757
00:20:02.757 real    0m38.303s
00:20:02.757 user    2m3.544s
00:20:02.757 sys     0m11.380s
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:20:02.757 ************************************
00:20:02.757 END TEST nvmf_host_multipath_status
00:20:02.757 ************************************
00:20:02.757 22:00:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:20:02.757 22:00:08 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:20:02.757 22:00:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:20:02.757 22:00:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:20:02.757 22:00:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:02.757 ************************************
00:20:02.757 START TEST nvmf_discovery_remove_ifc
00:20:02.757 ************************************
00:20:02.757 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:20:03.016 * Looking for test storage...
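Before the next test's output begins, it is worth spelling out what the nvmf_host_multipath_status teardown trace just above actually does. A rough bash sketch of that sequence, assuming a target process whose PID is in $tgt_pid (the run above used pid 91547); the real nvmftestfini and killprocess helpers add retries and error handling that are omitted here:

    # Remove the NVMe-oF subsystem the multipath test created.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the kernel initiator modules; -v makes modprobe print the
    # rmmod lines that appear in the log.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf target and wait for it to exit. wait only works here
    # because the target was started by this same shell, as in the autotest run.
    if kill -0 "$tgt_pid" 2>/dev/null; then
        echo "killing process with pid $tgt_pid"
        kill "$tgt_pid"
        wait "$tgt_pid"
    fi

    # Drop the test addresses from the initiator-side interface.
    ip -4 addr flush nvmf_init_if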
00:20:03.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.016 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:03.017 Cannot find device "nvmf_tgt_br" 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:03.017 Cannot find device "nvmf_tgt_br2" 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:03.017 Cannot find device "nvmf_tgt_br" 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:03.017 Cannot find device "nvmf_tgt_br2" 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:03.017 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:03.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:03.282 00:20:03.282 --- 10.0.0.2 ping statistics --- 00:20:03.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.282 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:03.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:03.282 00:20:03.282 --- 10.0.0.3 ping statistics --- 00:20:03.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.282 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:03.282 00:20:03.282 --- 10.0.0.1 ping statistics --- 00:20:03.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.282 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=92379 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 92379 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 92379 ']' 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.282 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.283 22:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:03.283 [2024-07-24 22:00:08.922493] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:20:03.283 [2024-07-24 22:00:08.922633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.548 [2024-07-24 22:00:09.063063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.548 [2024-07-24 22:00:09.142691] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
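For reference, a minimal sketch of the virtual test network that the nvmf_veth_init steps above assemble. Interface, namespace, and address names are the ones shown in the trace; this is a reconstruction of the logged ip/iptables commands, not the helper from test/nvmf/common.sh itself, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and omitted here.

  # Target side lives in its own network namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-facing veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-facing veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace

  # Addressing: 10.0.0.1 = initiator, 10.0.0.2 = first target listener.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring everything up and bridge the two "br" ends together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Allow NVMe/TCP traffic in, then verify connectivity in both directions.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1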
00:20:03.548 [2024-07-24 22:00:09.142756] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.548 [2024-07-24 22:00:09.142767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.548 [2024-07-24 22:00:09.142775] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.548 [2024-07-24 22:00:09.142781] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.548 [2024-07-24 22:00:09.142804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.548 [2024-07-24 22:00:09.197815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.483 [2024-07-24 22:00:09.888536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.483 [2024-07-24 22:00:09.896674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:04.483 null0 00:20:04.483 [2024-07-24 22:00:09.928548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92410 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92410 /tmp/host.sock 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 92410 ']' 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:04.483 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
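Both SPDK applications in this test are instances of nvmf_tgt: one runs inside the target namespace on the default RPC socket, the other acts as the NVMe-oF host on /tmp/host.sock with bdev_nvme debug logging. A rough sketch of that bring-up follows; the wait_for_rpc_socket polling loop is an assumption standing in for the suite's waitforlisten helper, not its actual implementation.

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

  # Target: core mask 0x2, all trace groups enabled, run inside the test namespace.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Host: core mask 0x1, private RPC socket, paused until framework_start_init is issued.
  "$SPDK_BIN/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!

  # Stand-in for waitforlisten: poll until the RPC socket answers a trivial request.
  wait_for_rpc_socket() {   # $1 = socket path
      local i
      for i in $(seq 1 100); do
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$1" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }
  wait_for_rpc_socket /var/tmp/spdk.sock
  wait_for_rpc_socket /tmp/host.sock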
00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:04.483 22:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.483 [2024-07-24 22:00:09.998997] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:20:04.483 [2024-07-24 22:00:09.999080] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92410 ] 00:20:04.483 [2024-07-24 22:00:10.134707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.744 [2024-07-24 22:00:10.204039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.744 [2024-07-24 22:00:10.307300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.744 22:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.679 [2024-07-24 22:00:11.361541] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:05.679 [2024-07-24 22:00:11.361600] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:05.679 [2024-07-24 22:00:11.361642] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:05.679 [2024-07-24 22:00:11.367580] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:05.938 [2024-07-24 22:00:11.424019] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 
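The host-side sequence traced above is: enable bdev_nvme options, finish framework init, then start discovery against the target's discovery service on 10.0.0.2:8009 and wait for the namespace bdev to appear. A condensed sketch of those RPCs plus the get_bdev_list/wait_for_bdev polling pattern used for the rest of the test; rpc_cmd here is a thin wrapper pointed at the host socket (the suite's own rpc_cmd passes -s explicitly, as in the trace), and the loop shape is reconstructed from the repeated get_bdev_list/sleep 1 cycles below.

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

  rpc_cmd bdev_nvme_set_options -e 1
  rpc_cmd framework_start_init
  rpc_cmd bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach

  # get_bdev_list: names of all bdevs currently visible to the host app.
  get_bdev_list() {
      rpc_cmd bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # wait_for_bdev: poll once per second until the list matches the expected value
  # (e.g. "nvme0n1" after attach, "" once the target interface has gone away).
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }
  wait_for_bdev nvme0n1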
00:20:05.938 [2024-07-24 22:00:11.424146] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:05.938 [2024-07-24 22:00:11.424176] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:05.938 [2024-07-24 22:00:11.424193] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:05.938 [2024-07-24 22:00:11.424220] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:05.938 [2024-07-24 22:00:11.430258] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ce64e0 was disconnected and freed. delete nvme_qpair. 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:05.938 22:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:06.874 22:00:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.874 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.874 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.874 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.874 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.874 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.874 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.874 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.133 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:07.133 22:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:08.069 22:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:09.026 22:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:10.438 22:00:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:11.372 22:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.372 [2024-07-24 22:00:16.851664] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:11.372 [2024-07-24 22:00:16.851735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.372 [2024-07-24 22:00:16.851752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.372 [2024-07-24 22:00:16.851765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.372 [2024-07-24 22:00:16.851775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.372 [2024-07-24 22:00:16.851786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.372 [2024-07-24 22:00:16.851795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.372 [2024-07-24 22:00:16.851806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.372 [2024-07-24 22:00:16.851815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:11.373 [2024-07-24 22:00:16.851825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:11.373 [2024-07-24 22:00:16.851834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.373 [2024-07-24 22:00:16.851843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc0e80 is same with the state(5) to be set 00:20:11.373 [2024-07-24 22:00:16.861661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc0e80 (9): Bad file descriptor 00:20:11.373 [2024-07-24 22:00:16.871679] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:12.309 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:12.309 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.309 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.309 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:12.309 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:12.309 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:12.309 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:12.309 [2024-07-24 22:00:17.886746] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:12.309 [2024-07-24 22:00:17.886891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc0e80 with addr=10.0.0.2, port=4420 00:20:12.309 [2024-07-24 22:00:17.886928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc0e80 is same with the state(5) to be set 00:20:12.310 [2024-07-24 22:00:17.886994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc0e80 (9): Bad file descriptor 00:20:12.310 [2024-07-24 22:00:17.887784] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:12.310 [2024-07-24 22:00:17.887833] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:12.310 [2024-07-24 22:00:17.887853] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:12.310 [2024-07-24 22:00:17.887874] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:12.310 [2024-07-24 22:00:17.887913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.310 [2024-07-24 22:00:17.887935] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:12.310 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.310 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:12.310 22:00:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:13.243 [2024-07-24 22:00:18.887996] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:20:13.243 [2024-07-24 22:00:18.888092] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:13.243 [2024-07-24 22:00:18.888120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:13.243 [2024-07-24 22:00:18.888131] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:13.243 [2024-07-24 22:00:18.888155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.243 [2024-07-24 22:00:18.888185] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:13.244 [2024-07-24 22:00:18.888240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.244 [2024-07-24 22:00:18.888257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.244 [2024-07-24 22:00:18.888271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.244 [2024-07-24 22:00:18.888281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.244 [2024-07-24 22:00:18.888291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.244 [2024-07-24 22:00:18.888301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.244 [2024-07-24 22:00:18.888311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.244 [2024-07-24 22:00:18.888320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.244 [2024-07-24 22:00:18.888330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:13.244 [2024-07-24 22:00:18.888340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:13.244 [2024-07-24 22:00:18.888349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:20:13.244 [2024-07-24 22:00:18.888384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c8c440 (9): Bad file descriptor 00:20:13.244 [2024-07-24 22:00:18.889382] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:13.244 [2024-07-24 22:00:18.889405] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:13.244 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:13.502 22:00:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:13.502 22:00:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.502 22:00:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:13.502 22:00:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.529 22:00:20 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:14.529 22:00:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:15.465 [2024-07-24 22:00:20.896386] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:15.465 [2024-07-24 22:00:20.896416] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:15.465 [2024-07-24 22:00:20.896434] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:15.465 [2024-07-24 22:00:20.902440] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:15.465 [2024-07-24 22:00:20.957856] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:15.465 [2024-07-24 22:00:20.957906] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:15.465 [2024-07-24 22:00:20.957930] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:15.465 [2024-07-24 22:00:20.957946] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:20:15.465 [2024-07-24 22:00:20.957955] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:15.465 [2024-07-24 22:00:20.965163] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c977a0 was disconnected and freed. delete nvme_qpair. 
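At this point the core of the test has completed: the target interface was torn down, the host watched the attached controller fail and its bdev disappear, and restoring the address brought a fresh controller back through discovery as nvme1n1. Condensed from the traced commands above:

  # Simulate the target NIC disappearing out from under an attached controller.
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''          # reconnects fail, controller is dropped, bdev list empties

  # Restore the interface; the discovery service re-attaches a new controller.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1     # the namespace reappears under the next controller name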
00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92410 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 92410 ']' 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 92410 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.465 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92410 00:20:15.722 killing process with pid 92410 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92410' 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 92410 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 92410 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.722 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.980 rmmod nvme_tcp 00:20:15.980 rmmod nvme_fabrics 00:20:15.980 rmmod nvme_keyring 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:20:15.980 22:00:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 92379 ']' 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 92379 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 92379 ']' 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 92379 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92379 00:20:15.980 killing process with pid 92379 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:15.980 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92379' 00:20:15.981 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 92379 00:20:15.981 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 92379 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:16.239 00:20:16.239 real 0m13.333s 00:20:16.239 user 0m22.844s 00:20:16.239 sys 0m2.340s 00:20:16.239 ************************************ 00:20:16.239 END TEST nvmf_discovery_remove_ifc 00:20:16.239 ************************************ 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:16.239 22:00:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:16.239 22:00:21 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:16.239 22:00:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:16.239 22:00:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:16.239 22:00:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.239 ************************************ 00:20:16.239 START TEST nvmf_identify_kernel_target 00:20:16.239 ************************************ 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:16.239 * Looking for test storage... 00:20:16.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.239 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:16.240 Cannot find device "nvmf_tgt_br" 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:20:16.240 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.498 Cannot find device "nvmf_tgt_br2" 00:20:16.498 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:20:16.498 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:16.498 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:16.498 Cannot find device "nvmf_tgt_br" 00:20:16.498 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:20:16.498 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:16.498 Cannot find device "nvmf_tgt_br2" 00:20:16.498 22:00:21 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:20:16.498 22:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:16.498 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:16.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:20:16.756 00:20:16.756 --- 10.0.0.2 ping statistics --- 00:20:16.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.756 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:16.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:16.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:16.756 00:20:16.756 --- 10.0.0.3 ping statistics --- 00:20:16.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.756 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:20:16.756 00:20:16.756 --- 10.0.0.1 ping statistics --- 00:20:16.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.756 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:16.756 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:17.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:17.015 Waiting for block devices as requested 00:20:17.015 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:17.273 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:17.273 No valid GPT data, bailing 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:17.273 22:00:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:17.531 No valid GPT data, bailing 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:17.531 No valid GPT data, bailing 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:17.531 No valid GPT data, bailing 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
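The trace above walks each /sys/block/nvme* device through the zoned and in-use checks, settles on /dev/nvme1n1 as the backing namespace, and starts building the kernel nvmet target under configfs; the remaining attribute writes and the port link follow just below. As a consolidated sketch of that provisioning sequence (the attribute file names are the standard nvmet configfs layout and are inferred here, since xtrace elides the redirection targets):

    # Sketch only: mirrors configure_kernel_target as traced in this run.
    # Attribute paths are the usual nvmet configfs layout, not shown verbatim in the trace.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys"
    mkdir "$ns"
    mkdir "$port"

    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"   # model number seen in the identify output below
    echo 1 > "$subsys/attr_allow_any_host"                           # no host allow-list for the test
    echo /dev/nvme1n1 > "$ns/device_path"                            # backing block device picked above
    echo 1 > "$ns/enable"

    echo 10.0.0.1 > "$port/addr_traddr"                              # host-side veth address the target listens on
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"                              # expose the subsystem on the port

Once the symlink is in place, the nvme discover against 10.0.0.1:4420 in the trace that follows returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.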
00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:17.531 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -a 10.0.0.1 -t tcp -s 4420 00:20:17.789 00:20:17.789 Discovery Log Number of Records 2, Generation counter 2 00:20:17.789 =====Discovery Log Entry 0====== 00:20:17.789 trtype: tcp 00:20:17.789 adrfam: ipv4 00:20:17.789 subtype: current discovery subsystem 00:20:17.789 treq: not specified, sq flow control disable supported 00:20:17.789 portid: 1 00:20:17.789 trsvcid: 4420 00:20:17.789 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:17.789 traddr: 10.0.0.1 00:20:17.789 eflags: none 00:20:17.789 sectype: none 00:20:17.789 =====Discovery Log Entry 1====== 00:20:17.789 trtype: tcp 00:20:17.789 adrfam: ipv4 00:20:17.789 subtype: nvme subsystem 00:20:17.789 treq: not specified, sq flow control disable supported 00:20:17.789 portid: 1 00:20:17.789 trsvcid: 4420 00:20:17.789 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:17.789 traddr: 10.0.0.1 00:20:17.789 eflags: none 00:20:17.789 sectype: none 00:20:17.789 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:17.789 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:17.789 ===================================================== 00:20:17.789 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:17.789 ===================================================== 00:20:17.789 Controller Capabilities/Features 00:20:17.789 ================================ 00:20:17.789 Vendor ID: 0000 00:20:17.789 Subsystem Vendor ID: 0000 00:20:17.789 Serial Number: 9710e9cc770c249e44bf 00:20:17.789 Model Number: Linux 00:20:17.789 Firmware Version: 6.7.0-68 00:20:17.789 Recommended Arb Burst: 0 00:20:17.789 IEEE OUI Identifier: 00 00 00 00:20:17.789 Multi-path I/O 00:20:17.789 May have multiple subsystem ports: No 00:20:17.789 May have multiple controllers: No 00:20:17.789 Associated with SR-IOV VF: No 00:20:17.789 Max Data Transfer Size: Unlimited 00:20:17.789 Max Number of Namespaces: 0 
00:20:17.789 Max Number of I/O Queues: 1024 00:20:17.789 NVMe Specification Version (VS): 1.3 00:20:17.789 NVMe Specification Version (Identify): 1.3 00:20:17.789 Maximum Queue Entries: 1024 00:20:17.789 Contiguous Queues Required: No 00:20:17.789 Arbitration Mechanisms Supported 00:20:17.789 Weighted Round Robin: Not Supported 00:20:17.789 Vendor Specific: Not Supported 00:20:17.789 Reset Timeout: 7500 ms 00:20:17.789 Doorbell Stride: 4 bytes 00:20:17.789 NVM Subsystem Reset: Not Supported 00:20:17.789 Command Sets Supported 00:20:17.789 NVM Command Set: Supported 00:20:17.789 Boot Partition: Not Supported 00:20:17.789 Memory Page Size Minimum: 4096 bytes 00:20:17.789 Memory Page Size Maximum: 4096 bytes 00:20:17.789 Persistent Memory Region: Not Supported 00:20:17.789 Optional Asynchronous Events Supported 00:20:17.789 Namespace Attribute Notices: Not Supported 00:20:17.789 Firmware Activation Notices: Not Supported 00:20:17.789 ANA Change Notices: Not Supported 00:20:17.789 PLE Aggregate Log Change Notices: Not Supported 00:20:17.789 LBA Status Info Alert Notices: Not Supported 00:20:17.789 EGE Aggregate Log Change Notices: Not Supported 00:20:17.789 Normal NVM Subsystem Shutdown event: Not Supported 00:20:17.789 Zone Descriptor Change Notices: Not Supported 00:20:17.789 Discovery Log Change Notices: Supported 00:20:17.789 Controller Attributes 00:20:17.789 128-bit Host Identifier: Not Supported 00:20:17.789 Non-Operational Permissive Mode: Not Supported 00:20:17.789 NVM Sets: Not Supported 00:20:17.789 Read Recovery Levels: Not Supported 00:20:17.789 Endurance Groups: Not Supported 00:20:17.789 Predictable Latency Mode: Not Supported 00:20:17.789 Traffic Based Keep ALive: Not Supported 00:20:17.789 Namespace Granularity: Not Supported 00:20:17.789 SQ Associations: Not Supported 00:20:17.789 UUID List: Not Supported 00:20:17.789 Multi-Domain Subsystem: Not Supported 00:20:17.789 Fixed Capacity Management: Not Supported 00:20:17.789 Variable Capacity Management: Not Supported 00:20:17.789 Delete Endurance Group: Not Supported 00:20:17.789 Delete NVM Set: Not Supported 00:20:17.789 Extended LBA Formats Supported: Not Supported 00:20:17.789 Flexible Data Placement Supported: Not Supported 00:20:17.789 00:20:17.789 Controller Memory Buffer Support 00:20:17.789 ================================ 00:20:17.789 Supported: No 00:20:17.789 00:20:17.789 Persistent Memory Region Support 00:20:17.789 ================================ 00:20:17.789 Supported: No 00:20:17.789 00:20:17.789 Admin Command Set Attributes 00:20:17.789 ============================ 00:20:17.789 Security Send/Receive: Not Supported 00:20:17.789 Format NVM: Not Supported 00:20:17.789 Firmware Activate/Download: Not Supported 00:20:17.789 Namespace Management: Not Supported 00:20:17.789 Device Self-Test: Not Supported 00:20:17.789 Directives: Not Supported 00:20:17.789 NVMe-MI: Not Supported 00:20:17.789 Virtualization Management: Not Supported 00:20:17.789 Doorbell Buffer Config: Not Supported 00:20:17.789 Get LBA Status Capability: Not Supported 00:20:17.789 Command & Feature Lockdown Capability: Not Supported 00:20:17.789 Abort Command Limit: 1 00:20:17.789 Async Event Request Limit: 1 00:20:17.789 Number of Firmware Slots: N/A 00:20:17.789 Firmware Slot 1 Read-Only: N/A 00:20:17.789 Firmware Activation Without Reset: N/A 00:20:17.789 Multiple Update Detection Support: N/A 00:20:17.789 Firmware Update Granularity: No Information Provided 00:20:17.789 Per-Namespace SMART Log: No 00:20:17.789 Asymmetric Namespace Access Log Page: 
Not Supported 00:20:17.789 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:17.789 Command Effects Log Page: Not Supported 00:20:17.789 Get Log Page Extended Data: Supported 00:20:17.789 Telemetry Log Pages: Not Supported 00:20:17.789 Persistent Event Log Pages: Not Supported 00:20:17.789 Supported Log Pages Log Page: May Support 00:20:17.789 Commands Supported & Effects Log Page: Not Supported 00:20:17.789 Feature Identifiers & Effects Log Page:May Support 00:20:17.789 NVMe-MI Commands & Effects Log Page: May Support 00:20:17.789 Data Area 4 for Telemetry Log: Not Supported 00:20:17.789 Error Log Page Entries Supported: 1 00:20:17.789 Keep Alive: Not Supported 00:20:17.789 00:20:17.789 NVM Command Set Attributes 00:20:17.789 ========================== 00:20:17.789 Submission Queue Entry Size 00:20:17.789 Max: 1 00:20:17.789 Min: 1 00:20:17.789 Completion Queue Entry Size 00:20:17.789 Max: 1 00:20:17.789 Min: 1 00:20:17.789 Number of Namespaces: 0 00:20:17.789 Compare Command: Not Supported 00:20:17.789 Write Uncorrectable Command: Not Supported 00:20:17.789 Dataset Management Command: Not Supported 00:20:17.789 Write Zeroes Command: Not Supported 00:20:17.789 Set Features Save Field: Not Supported 00:20:17.789 Reservations: Not Supported 00:20:17.789 Timestamp: Not Supported 00:20:17.789 Copy: Not Supported 00:20:17.789 Volatile Write Cache: Not Present 00:20:17.789 Atomic Write Unit (Normal): 1 00:20:17.789 Atomic Write Unit (PFail): 1 00:20:17.789 Atomic Compare & Write Unit: 1 00:20:17.789 Fused Compare & Write: Not Supported 00:20:17.789 Scatter-Gather List 00:20:17.789 SGL Command Set: Supported 00:20:17.789 SGL Keyed: Not Supported 00:20:17.789 SGL Bit Bucket Descriptor: Not Supported 00:20:17.789 SGL Metadata Pointer: Not Supported 00:20:17.789 Oversized SGL: Not Supported 00:20:17.789 SGL Metadata Address: Not Supported 00:20:17.789 SGL Offset: Supported 00:20:17.789 Transport SGL Data Block: Not Supported 00:20:17.789 Replay Protected Memory Block: Not Supported 00:20:17.789 00:20:17.789 Firmware Slot Information 00:20:17.789 ========================= 00:20:17.789 Active slot: 0 00:20:17.789 00:20:17.789 00:20:17.789 Error Log 00:20:17.789 ========= 00:20:17.789 00:20:17.789 Active Namespaces 00:20:17.789 ================= 00:20:17.789 Discovery Log Page 00:20:17.789 ================== 00:20:17.789 Generation Counter: 2 00:20:17.789 Number of Records: 2 00:20:17.789 Record Format: 0 00:20:17.789 00:20:17.789 Discovery Log Entry 0 00:20:17.789 ---------------------- 00:20:17.789 Transport Type: 3 (TCP) 00:20:17.789 Address Family: 1 (IPv4) 00:20:17.789 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:17.789 Entry Flags: 00:20:17.789 Duplicate Returned Information: 0 00:20:17.789 Explicit Persistent Connection Support for Discovery: 0 00:20:17.789 Transport Requirements: 00:20:17.789 Secure Channel: Not Specified 00:20:17.790 Port ID: 1 (0x0001) 00:20:17.790 Controller ID: 65535 (0xffff) 00:20:17.790 Admin Max SQ Size: 32 00:20:17.790 Transport Service Identifier: 4420 00:20:17.790 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:17.790 Transport Address: 10.0.0.1 00:20:17.790 Discovery Log Entry 1 00:20:17.790 ---------------------- 00:20:17.790 Transport Type: 3 (TCP) 00:20:17.790 Address Family: 1 (IPv4) 00:20:17.790 Subsystem Type: 2 (NVM Subsystem) 00:20:17.790 Entry Flags: 00:20:17.790 Duplicate Returned Information: 0 00:20:17.790 Explicit Persistent Connection Support for Discovery: 0 00:20:17.790 Transport Requirements: 00:20:17.790 
Secure Channel: Not Specified 00:20:17.790 Port ID: 1 (0x0001) 00:20:17.790 Controller ID: 65535 (0xffff) 00:20:17.790 Admin Max SQ Size: 32 00:20:17.790 Transport Service Identifier: 4420 00:20:17.790 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:17.790 Transport Address: 10.0.0.1 00:20:17.790 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:18.047 get_feature(0x01) failed 00:20:18.047 get_feature(0x02) failed 00:20:18.047 get_feature(0x04) failed 00:20:18.047 ===================================================== 00:20:18.047 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:18.047 ===================================================== 00:20:18.047 Controller Capabilities/Features 00:20:18.047 ================================ 00:20:18.047 Vendor ID: 0000 00:20:18.047 Subsystem Vendor ID: 0000 00:20:18.047 Serial Number: f85cf291cf531c254944 00:20:18.047 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:18.047 Firmware Version: 6.7.0-68 00:20:18.047 Recommended Arb Burst: 6 00:20:18.047 IEEE OUI Identifier: 00 00 00 00:20:18.047 Multi-path I/O 00:20:18.047 May have multiple subsystem ports: Yes 00:20:18.047 May have multiple controllers: Yes 00:20:18.047 Associated with SR-IOV VF: No 00:20:18.047 Max Data Transfer Size: Unlimited 00:20:18.047 Max Number of Namespaces: 1024 00:20:18.047 Max Number of I/O Queues: 128 00:20:18.047 NVMe Specification Version (VS): 1.3 00:20:18.047 NVMe Specification Version (Identify): 1.3 00:20:18.047 Maximum Queue Entries: 1024 00:20:18.047 Contiguous Queues Required: No 00:20:18.047 Arbitration Mechanisms Supported 00:20:18.047 Weighted Round Robin: Not Supported 00:20:18.047 Vendor Specific: Not Supported 00:20:18.047 Reset Timeout: 7500 ms 00:20:18.047 Doorbell Stride: 4 bytes 00:20:18.047 NVM Subsystem Reset: Not Supported 00:20:18.047 Command Sets Supported 00:20:18.047 NVM Command Set: Supported 00:20:18.047 Boot Partition: Not Supported 00:20:18.047 Memory Page Size Minimum: 4096 bytes 00:20:18.047 Memory Page Size Maximum: 4096 bytes 00:20:18.047 Persistent Memory Region: Not Supported 00:20:18.047 Optional Asynchronous Events Supported 00:20:18.047 Namespace Attribute Notices: Supported 00:20:18.047 Firmware Activation Notices: Not Supported 00:20:18.047 ANA Change Notices: Supported 00:20:18.047 PLE Aggregate Log Change Notices: Not Supported 00:20:18.047 LBA Status Info Alert Notices: Not Supported 00:20:18.047 EGE Aggregate Log Change Notices: Not Supported 00:20:18.047 Normal NVM Subsystem Shutdown event: Not Supported 00:20:18.047 Zone Descriptor Change Notices: Not Supported 00:20:18.047 Discovery Log Change Notices: Not Supported 00:20:18.047 Controller Attributes 00:20:18.047 128-bit Host Identifier: Supported 00:20:18.047 Non-Operational Permissive Mode: Not Supported 00:20:18.047 NVM Sets: Not Supported 00:20:18.047 Read Recovery Levels: Not Supported 00:20:18.047 Endurance Groups: Not Supported 00:20:18.047 Predictable Latency Mode: Not Supported 00:20:18.047 Traffic Based Keep ALive: Supported 00:20:18.047 Namespace Granularity: Not Supported 00:20:18.047 SQ Associations: Not Supported 00:20:18.047 UUID List: Not Supported 00:20:18.047 Multi-Domain Subsystem: Not Supported 00:20:18.047 Fixed Capacity Management: Not Supported 00:20:18.047 Variable Capacity Management: Not Supported 00:20:18.047 
Delete Endurance Group: Not Supported 00:20:18.047 Delete NVM Set: Not Supported 00:20:18.047 Extended LBA Formats Supported: Not Supported 00:20:18.047 Flexible Data Placement Supported: Not Supported 00:20:18.047 00:20:18.047 Controller Memory Buffer Support 00:20:18.047 ================================ 00:20:18.047 Supported: No 00:20:18.047 00:20:18.047 Persistent Memory Region Support 00:20:18.047 ================================ 00:20:18.047 Supported: No 00:20:18.047 00:20:18.047 Admin Command Set Attributes 00:20:18.047 ============================ 00:20:18.047 Security Send/Receive: Not Supported 00:20:18.047 Format NVM: Not Supported 00:20:18.047 Firmware Activate/Download: Not Supported 00:20:18.047 Namespace Management: Not Supported 00:20:18.047 Device Self-Test: Not Supported 00:20:18.047 Directives: Not Supported 00:20:18.047 NVMe-MI: Not Supported 00:20:18.047 Virtualization Management: Not Supported 00:20:18.047 Doorbell Buffer Config: Not Supported 00:20:18.047 Get LBA Status Capability: Not Supported 00:20:18.047 Command & Feature Lockdown Capability: Not Supported 00:20:18.047 Abort Command Limit: 4 00:20:18.047 Async Event Request Limit: 4 00:20:18.047 Number of Firmware Slots: N/A 00:20:18.047 Firmware Slot 1 Read-Only: N/A 00:20:18.047 Firmware Activation Without Reset: N/A 00:20:18.047 Multiple Update Detection Support: N/A 00:20:18.047 Firmware Update Granularity: No Information Provided 00:20:18.047 Per-Namespace SMART Log: Yes 00:20:18.047 Asymmetric Namespace Access Log Page: Supported 00:20:18.047 ANA Transition Time : 10 sec 00:20:18.047 00:20:18.047 Asymmetric Namespace Access Capabilities 00:20:18.047 ANA Optimized State : Supported 00:20:18.047 ANA Non-Optimized State : Supported 00:20:18.047 ANA Inaccessible State : Supported 00:20:18.047 ANA Persistent Loss State : Supported 00:20:18.047 ANA Change State : Supported 00:20:18.047 ANAGRPID is not changed : No 00:20:18.047 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:18.047 00:20:18.047 ANA Group Identifier Maximum : 128 00:20:18.047 Number of ANA Group Identifiers : 128 00:20:18.047 Max Number of Allowed Namespaces : 1024 00:20:18.048 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:18.048 Command Effects Log Page: Supported 00:20:18.048 Get Log Page Extended Data: Supported 00:20:18.048 Telemetry Log Pages: Not Supported 00:20:18.048 Persistent Event Log Pages: Not Supported 00:20:18.048 Supported Log Pages Log Page: May Support 00:20:18.048 Commands Supported & Effects Log Page: Not Supported 00:20:18.048 Feature Identifiers & Effects Log Page:May Support 00:20:18.048 NVMe-MI Commands & Effects Log Page: May Support 00:20:18.048 Data Area 4 for Telemetry Log: Not Supported 00:20:18.048 Error Log Page Entries Supported: 128 00:20:18.048 Keep Alive: Supported 00:20:18.048 Keep Alive Granularity: 1000 ms 00:20:18.048 00:20:18.048 NVM Command Set Attributes 00:20:18.048 ========================== 00:20:18.048 Submission Queue Entry Size 00:20:18.048 Max: 64 00:20:18.048 Min: 64 00:20:18.048 Completion Queue Entry Size 00:20:18.048 Max: 16 00:20:18.048 Min: 16 00:20:18.048 Number of Namespaces: 1024 00:20:18.048 Compare Command: Not Supported 00:20:18.048 Write Uncorrectable Command: Not Supported 00:20:18.048 Dataset Management Command: Supported 00:20:18.048 Write Zeroes Command: Supported 00:20:18.048 Set Features Save Field: Not Supported 00:20:18.048 Reservations: Not Supported 00:20:18.048 Timestamp: Not Supported 00:20:18.048 Copy: Not Supported 00:20:18.048 Volatile Write Cache: Present 
00:20:18.048 Atomic Write Unit (Normal): 1 00:20:18.048 Atomic Write Unit (PFail): 1 00:20:18.048 Atomic Compare & Write Unit: 1 00:20:18.048 Fused Compare & Write: Not Supported 00:20:18.048 Scatter-Gather List 00:20:18.048 SGL Command Set: Supported 00:20:18.048 SGL Keyed: Not Supported 00:20:18.048 SGL Bit Bucket Descriptor: Not Supported 00:20:18.048 SGL Metadata Pointer: Not Supported 00:20:18.048 Oversized SGL: Not Supported 00:20:18.048 SGL Metadata Address: Not Supported 00:20:18.048 SGL Offset: Supported 00:20:18.048 Transport SGL Data Block: Not Supported 00:20:18.048 Replay Protected Memory Block: Not Supported 00:20:18.048 00:20:18.048 Firmware Slot Information 00:20:18.048 ========================= 00:20:18.048 Active slot: 0 00:20:18.048 00:20:18.048 Asymmetric Namespace Access 00:20:18.048 =========================== 00:20:18.048 Change Count : 0 00:20:18.048 Number of ANA Group Descriptors : 1 00:20:18.048 ANA Group Descriptor : 0 00:20:18.048 ANA Group ID : 1 00:20:18.048 Number of NSID Values : 1 00:20:18.048 Change Count : 0 00:20:18.048 ANA State : 1 00:20:18.048 Namespace Identifier : 1 00:20:18.048 00:20:18.048 Commands Supported and Effects 00:20:18.048 ============================== 00:20:18.048 Admin Commands 00:20:18.048 -------------- 00:20:18.048 Get Log Page (02h): Supported 00:20:18.048 Identify (06h): Supported 00:20:18.048 Abort (08h): Supported 00:20:18.048 Set Features (09h): Supported 00:20:18.048 Get Features (0Ah): Supported 00:20:18.048 Asynchronous Event Request (0Ch): Supported 00:20:18.048 Keep Alive (18h): Supported 00:20:18.048 I/O Commands 00:20:18.048 ------------ 00:20:18.048 Flush (00h): Supported 00:20:18.048 Write (01h): Supported LBA-Change 00:20:18.048 Read (02h): Supported 00:20:18.048 Write Zeroes (08h): Supported LBA-Change 00:20:18.048 Dataset Management (09h): Supported 00:20:18.048 00:20:18.048 Error Log 00:20:18.048 ========= 00:20:18.048 Entry: 0 00:20:18.048 Error Count: 0x3 00:20:18.048 Submission Queue Id: 0x0 00:20:18.048 Command Id: 0x5 00:20:18.048 Phase Bit: 0 00:20:18.048 Status Code: 0x2 00:20:18.048 Status Code Type: 0x0 00:20:18.048 Do Not Retry: 1 00:20:18.048 Error Location: 0x28 00:20:18.048 LBA: 0x0 00:20:18.048 Namespace: 0x0 00:20:18.048 Vendor Log Page: 0x0 00:20:18.048 ----------- 00:20:18.048 Entry: 1 00:20:18.048 Error Count: 0x2 00:20:18.048 Submission Queue Id: 0x0 00:20:18.048 Command Id: 0x5 00:20:18.048 Phase Bit: 0 00:20:18.048 Status Code: 0x2 00:20:18.048 Status Code Type: 0x0 00:20:18.048 Do Not Retry: 1 00:20:18.048 Error Location: 0x28 00:20:18.048 LBA: 0x0 00:20:18.048 Namespace: 0x0 00:20:18.048 Vendor Log Page: 0x0 00:20:18.048 ----------- 00:20:18.048 Entry: 2 00:20:18.048 Error Count: 0x1 00:20:18.048 Submission Queue Id: 0x0 00:20:18.048 Command Id: 0x4 00:20:18.048 Phase Bit: 0 00:20:18.048 Status Code: 0x2 00:20:18.048 Status Code Type: 0x0 00:20:18.048 Do Not Retry: 1 00:20:18.048 Error Location: 0x28 00:20:18.048 LBA: 0x0 00:20:18.048 Namespace: 0x0 00:20:18.048 Vendor Log Page: 0x0 00:20:18.048 00:20:18.048 Number of Queues 00:20:18.048 ================ 00:20:18.048 Number of I/O Submission Queues: 128 00:20:18.048 Number of I/O Completion Queues: 128 00:20:18.048 00:20:18.048 ZNS Specific Controller Data 00:20:18.048 ============================ 00:20:18.048 Zone Append Size Limit: 0 00:20:18.048 00:20:18.048 00:20:18.048 Active Namespaces 00:20:18.048 ================= 00:20:18.048 get_feature(0x05) failed 00:20:18.048 Namespace ID:1 00:20:18.048 Command Set Identifier: NVM (00h) 
00:20:18.048 Deallocate: Supported 00:20:18.048 Deallocated/Unwritten Error: Not Supported 00:20:18.048 Deallocated Read Value: Unknown 00:20:18.048 Deallocate in Write Zeroes: Not Supported 00:20:18.048 Deallocated Guard Field: 0xFFFF 00:20:18.048 Flush: Supported 00:20:18.048 Reservation: Not Supported 00:20:18.048 Namespace Sharing Capabilities: Multiple Controllers 00:20:18.048 Size (in LBAs): 1310720 (5GiB) 00:20:18.048 Capacity (in LBAs): 1310720 (5GiB) 00:20:18.048 Utilization (in LBAs): 1310720 (5GiB) 00:20:18.048 UUID: 03cba210-f8e6-435b-9e78-a092fee44924 00:20:18.048 Thin Provisioning: Not Supported 00:20:18.048 Per-NS Atomic Units: Yes 00:20:18.048 Atomic Boundary Size (Normal): 0 00:20:18.048 Atomic Boundary Size (PFail): 0 00:20:18.048 Atomic Boundary Offset: 0 00:20:18.048 NGUID/EUI64 Never Reused: No 00:20:18.048 ANA group ID: 1 00:20:18.048 Namespace Write Protected: No 00:20:18.048 Number of LBA Formats: 1 00:20:18.048 Current LBA Format: LBA Format #00 00:20:18.048 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:18.048 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.048 rmmod nvme_tcp 00:20:18.048 rmmod nvme_fabrics 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:18.048 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:18.048 
22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:20:18.305 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:18.305 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:18.305 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:18.305 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:18.305 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:18.305 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:18.305 22:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:18.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:18.872 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:19.130 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:19.130 00:20:19.130 real 0m2.852s 00:20:19.130 user 0m0.966s 00:20:19.130 sys 0m1.326s 00:20:19.130 22:00:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:19.130 ************************************ 00:20:19.130 END TEST nvmf_identify_kernel_target 00:20:19.130 22:00:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.130 ************************************ 00:20:19.130 22:00:24 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:19.130 22:00:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:19.130 22:00:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:19.130 22:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:19.130 ************************************ 00:20:19.130 START TEST nvmf_auth_host 00:20:19.130 ************************************ 00:20:19.130 22:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:19.130 * Looking for test storage... 
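The clean_kernel_target teardown traced just before this point is the mirror image of the provisioning step: disable the namespace, unlink the subsystem from the port, remove the configfs directories innermost-first, then unload the nvmet modules before setup.sh rebinds the NVMe devices to their userspace driver. A sketch, with the echo target inferred as the namespace enable attribute (the trace elides the redirection):

    # Sketch of clean_kernel_target as traced above.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"     # quiesce the namespace first
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                # only succeeds once no holders remain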
00:20:19.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:19.130 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.130 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.131 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:19.389 Cannot find device "nvmf_tgt_br" 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.389 Cannot find device "nvmf_tgt_br2" 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:19.389 Cannot find device "nvmf_tgt_br" 
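nvmftestinit is entering nvmf_veth_init again for the auth-host test: the stale interfaces left by the previous test are cleaned up here, and the trace that continues below recreates the namespace, the three veth pairs, the bridge and the firewall rules before verifying reachability with ping. A consolidated sketch of the topology it builds, using the same names and addresses as this run:

    # Sketch of the veth/namespace topology nvmf_veth_init constructs below.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-facing, two target-facing.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-facing ends live inside the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator / kernel target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target address

    # Bring everything up and bridge the host-side peers together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace then confirm 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 is reachable from inside nvmf_tgt_ns_spdk before nvmf_tgt is started in that namespace.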
00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:19.389 Cannot find device "nvmf_tgt_br2" 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.389 22:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.389 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:19.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:19.650 00:20:19.650 --- 10.0.0.2 ping statistics --- 00:20:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.650 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:19.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:19.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:20:19.650 00:20:19.650 --- 10.0.0.3 ping statistics --- 00:20:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.650 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:19.650 00:20:19.650 --- 10.0.0.1 ping statistics --- 00:20:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.650 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=93278 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 93278 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 93278 ']' 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:19.650 22:00:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:19.650 22:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a19e7f3132a2f9a9227187a37687ac6f 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tMF 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a19e7f3132a2f9a9227187a37687ac6f 0 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a19e7f3132a2f9a9227187a37687ac6f 0 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a19e7f3132a2f9a9227187a37687ac6f 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:20.595 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tMF 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tMF 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tMF 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a3b29d71d3312f1ee610c3e924c397540d836d4a09d7344a3b2ff8f884a93820 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.P7w 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a3b29d71d3312f1ee610c3e924c397540d836d4a09d7344a3b2ff8f884a93820 3 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a3b29d71d3312f1ee610c3e924c397540d836d4a09d7344a3b2ff8f884a93820 3 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a3b29d71d3312f1ee610c3e924c397540d836d4a09d7344a3b2ff8f884a93820 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.P7w 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.P7w 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.P7w 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c43fa6c199841d17172fb60c72608e2bcebcc87726e2f96f 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:20.853 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Tz9 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c43fa6c199841d17172fb60c72608e2bcebcc87726e2f96f 0 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c43fa6c199841d17172fb60c72608e2bcebcc87726e2f96f 0 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c43fa6c199841d17172fb60c72608e2bcebcc87726e2f96f 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Tz9 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Tz9 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Tz9 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=30922681dc255f541e5ca793bbbb4a20459955860829e5c7 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UoE 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 30922681dc255f541e5ca793bbbb4a20459955860829e5c7 2 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 30922681dc255f541e5ca793bbbb4a20459955860829e5c7 2 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=30922681dc255f541e5ca793bbbb4a20459955860829e5c7 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UoE 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UoE 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.UoE 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f839ef980e94790897e1ee343032fe1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dXi 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f839ef980e94790897e1ee343032fe1 
1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f839ef980e94790897e1ee343032fe1 1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f839ef980e94790897e1ee343032fe1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:20:20.854 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dXi 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dXi 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.dXi 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e32c1cc28e0b0173a7e70e83ad9fa80 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zM1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e32c1cc28e0b0173a7e70e83ad9fa80 1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e32c1cc28e0b0173a7e70e83ad9fa80 1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e32c1cc28e0b0173a7e70e83ad9fa80 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zM1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zM1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zM1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:20:21.112 22:00:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0c51a9572134e6d9b5e7e076e707cfe4253871cd2e5bac22 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1l5 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c51a9572134e6d9b5e7e076e707cfe4253871cd2e5bac22 2 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c51a9572134e6d9b5e7e076e707cfe4253871cd2e5bac22 2 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c51a9572134e6d9b5e7e076e707cfe4253871cd2e5bac22 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1l5 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1l5 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1l5 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:20:21.112 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a67b43b4c0c5d75f2ee0da569173b56 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ojA 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a67b43b4c0c5d75f2ee0da569173b56 0 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a67b43b4c0c5d75f2ee0da569173b56 0 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a67b43b4c0c5d75f2ee0da569173b56 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ojA 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ojA 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ojA 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=52fde0273ed640120992dc3a68692b31db5a20de08945207250cae48c6eba374 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KHV 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 52fde0273ed640120992dc3a68692b31db5a20de08945207250cae48c6eba374 3 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 52fde0273ed640120992dc3a68692b31db5a20de08945207250cae48c6eba374 3 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=52fde0273ed640120992dc3a68692b31db5a20de08945207250cae48c6eba374 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KHV 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KHV 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KHV 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93278 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 93278 ']' 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:21.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:21.113 22:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tMF 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.P7w ]] 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P7w 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Tz9 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.UoE ]] 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.UoE 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.371 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.dXi 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zM1 ]] 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zM1 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1l5 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.630 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ojA ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ojA 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KHV 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:21.631 22:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:21.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:21.890 Waiting for block devices as requested 00:20:21.890 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:22.148 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:22.714 No valid GPT data, bailing 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:22.714 No valid GPT data, bailing 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:22.714 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:22.972 No valid GPT data, bailing 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:22.972 No valid GPT data, bailing 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:20:22.972 22:00:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:22.972 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -a 10.0.0.1 -t tcp -s 4420 00:20:22.972 00:20:22.973 Discovery Log Number of Records 2, Generation counter 2 00:20:22.973 =====Discovery Log Entry 0====== 00:20:22.973 trtype: tcp 00:20:22.973 adrfam: ipv4 00:20:22.973 subtype: current discovery subsystem 00:20:22.973 treq: not specified, sq flow control disable supported 00:20:22.973 portid: 1 00:20:22.973 trsvcid: 4420 00:20:22.973 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:22.973 traddr: 10.0.0.1 00:20:22.973 eflags: none 00:20:22.973 sectype: none 00:20:22.973 =====Discovery Log Entry 1====== 00:20:22.973 trtype: tcp 00:20:22.973 adrfam: ipv4 00:20:22.973 subtype: nvme subsystem 00:20:22.973 treq: not specified, sq flow control disable supported 00:20:22.973 portid: 1 00:20:22.973 trsvcid: 4420 00:20:22.973 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:22.973 traddr: 10.0.0.1 00:20:22.973 eflags: none 00:20:22.973 sectype: none 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.973 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 nvme0n1 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:23.232 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:23.233 22:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:23.233 22:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.233 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.233 22:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.492 nvme0n1 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.492 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.751 nvme0n1 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.751 22:00:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.751 nvme0n1 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.751 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:24.010 22:00:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.010 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 nvme0n1 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:24.011 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.269 nvme0n1 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.269 22:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.530 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.531 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.531 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.531 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.804 nvme0n1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.804 nvme0n1 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.804 22:00:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.804 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 nvme0n1 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.063 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.323 nvme0n1 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
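Note on the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion traced at host/auth.sh@58 above: this is the bash conditional-argument idiom. When the controller key for a keyid is unset or empty, as it is for keyid 4 where the trace shows `[[ -z '' ]]`, the array stays empty and the flag pair is silently dropped from the attach call that follows, which is why the next bdev_nvme_attach_controller carries only --dhchap-key key4. A minimal standalone sketch of the idiom, with placeholder key material rather than the values from this run:

#!/usr/bin/env bash
# Illustration of the ${var:+word} idiom used at host/auth.sh@58.
# The key string below is a placeholder, not a value from the suite.
declare -a ckeys
ckeys[0]="DHHC-1:03:placeholder-ctrlr-key:"   # controller key present for keyid 0
ckeys[4]=""                                    # no controller key for keyid 4

for keyid in 0 4; do
    # Expands to two array elements when ckeys[keyid] is non-empty,
    # and to an empty array when it is unset or empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=${keyid}: extra attach args -> ${ckey[*]:-<none>}"
done
# keyid=0: extra attach args -> --dhchap-ctrlr-key ckey0
# keyid=4: extra attach args -> <none>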
00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.323 22:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.582 nvme0n1 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.582 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
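The block above is one pass of the suite's digest x dhgroup x keyid sweep: restrict the initiator's DH-HMAC-CHAP parameters, attach to the target with the key pair under test, confirm the controller came up as nvme0, and detach before the next combination. A condensed sketch of that cycle follows; it is an illustration built from the logged commands, not the test script itself. The helper name connect_cycle is invented here, rpc_cmd is assumed to wrap scripts/rpc.py as it does in the suite, the target is assumed to be listening on 10.0.0.1:4420, and keyN/ckeyN are assumed to be already loaded (for keyids with no controller key, such as key 4 above, the --dhchap-ctrlr-key pair is dropped as described after the previous block).

# One iteration of the connect/verify/detach cycle the trace repeats.
connect_cycle() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the initiator to the digest/dhgroup combination under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key and (in the common case) the controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The authenticated connect should surface exactly one controller, nvme0.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
}

# e.g. the combination exercised just above:
# connect_cycle sha256 ffdhe4096 0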
00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.148 nvme0n1 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.148 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.406 22:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.406 nvme0n1 00:20:26.406 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.406 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.406 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.406 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.406 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.665 nvme0n1 00:20:26.665 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.925 nvme0n1 00:20:26.925 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.183 22:00:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.183 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.441 nvme0n1 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.441 22:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.339 nvme0n1 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.339 22:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.339 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.339 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.339 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.339 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.339 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.597 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.855 nvme0n1 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.855 
22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.855 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.856 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.113 nvme0n1 00:20:30.113 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.113 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.113 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.113 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.113 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.372 22:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.631 nvme0n1 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.631 22:00:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.631 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.196 nvme0n1 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.196 22:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.770 nvme0n1 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.770 22:00:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.770 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.347 nvme0n1 00:20:32.347 22:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.347 22:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.347 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.347 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.347 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.348 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.348 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.348 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.348 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.348 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.606 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.173 nvme0n1 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.173 
22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
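At this point the trace is inside another connect_authenticate pass (sha256 with ffdhe8192, keyid 3): bdev_nvme_set_options pins the host to a single digest/DH-group pair, get_main_ns_ip resolves the initiator address (10.0.0.1), and bdev_nvme_attach_controller performs the DH-HMAC-CHAP handshake as part of the fabrics CONNECT. A minimal host-side sketch of that same sequence, using only RPCs that appear verbatim in this trace; note that rpc_cmd in the log is the test suite's wrapper around scripts/rpc.py, and key3/ckey3 are names of DHHC-1 secrets loaded into SPDK's keyring earlier in the run, so the plain scripts/rpc.py invocation and the default RPC socket are assumptions here:

# Restrict the host to the digest/DH group under test (mirrors bdev_nvme_set_options in the trace).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach to the target; DH-HMAC-CHAP authentication runs during the CONNECT exchange.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# A successful handshake leaves a controller named nvme0; verify it exists, then tear it
# down before the next keyid is exercised.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py bdev_nvme_detach_controller nvme0

Passing --dhchap-ctrlr-key requests bidirectional authentication; the keyid-4 iterations in this log omit it because their ckey is empty, so only the host is authenticated in those passes.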
00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.173 22:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.738 nvme0n1 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:33.738 
22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.738 22:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.302 nvme0n1 00:20:34.302 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.302 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.302 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.302 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.302 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.302 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.560 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.561 nvme0n1 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.561 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 nvme0n1 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.834 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.093 nvme0n1 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.093 nvme0n1 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.093 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.351 nvme0n1 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.351 22:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.351 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.610 nvme0n1 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.610 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.869 nvme0n1 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.869 nvme0n1 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.869 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.127 nvme0n1 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.127 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 nvme0n1 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 22:00:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.386 22:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.386 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.644 nvme0n1 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.644 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.645 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.904 nvme0n1 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.904 22:00:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.904 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.163 nvme0n1 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:37.163 22:00:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.163 22:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.164 22:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:37.164 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.164 22:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.422 nvme0n1 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.422 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.681 nvme0n1 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.681 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:37.939 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.940 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.197 nvme0n1 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:38.197 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.198 22:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.455 nvme0n1 00:20:38.455 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.713 22:00:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.713 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.972 nvme0n1 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.972 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.539 nvme0n1 00:20:39.539 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.539 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.539 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.539 22:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.539 22:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
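The xtrace entries above repeat one host-side verification cycle per key. A minimal sketch of that cycle, using only the RPCs visible in this log (rpc_cmd being the test suite's RPC wrapper) and assuming keyN/ckeyN name DH-HMAC-CHAP secrets registered earlier in the test run:

    digest=sha384 dhgroup=ffdhe6144 keyid=1   # example values taken from the surrounding entries
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller must appear as nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

For keys with no controller secret (keyid 4 in this run has an empty ckey), the --dhchap-ctrlr-key argument is omitted entirely, which is what the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 in these entries is doing.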
00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.539 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 nvme0n1 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
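The host/auth.sh@100-@104 markers in these entries show the nested loop that drives this whole section of the log; a sketch of that structure, with the digests, dhgroups and keys arrays assumed to be populated earlier in host/auth.sh:

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side setup (the @42-@51 entries above)
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach, verify, detach (the @55-@65 entries above)
        done
      done
    done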
00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.798 22:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.363 nvme0n1 00:20:40.363 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.363 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.363 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.363 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.363 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.620 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.621 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.188 nvme0n1 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.188 22:00:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.754 nvme0n1 00:20:41.754 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.754 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.754 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.754 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.754 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.754 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.013 22:00:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.583 nvme0n1 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.583 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:42.584 22:00:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.584 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.149 nvme0n1 00:20:43.149 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.149 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.149 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.149 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.149 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.149 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.408 22:00:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.408 nvme0n1 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.408 22:00:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.408 22:00:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.409 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.667 nvme0n1 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.667 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.668 nvme0n1 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.668 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.926 22:00:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.926 22:00:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.926 nvme0n1 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.926 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.185 nvme0n1 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.185 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.186 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.449 nvme0n1 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.449 
22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.449 22:00:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.449 22:00:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.449 nvme0n1 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.449 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
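The stretch above is one full pass of the auth matrix: host/auth.sh@103 programs the target-side secret with nvmet_auth_set_key, host/auth.sh@60 restricts the initiator to a single digest/dhgroup pair with bdev_nvme_set_options, and host/auth.sh@61 attaches with the matching --dhchap-key/--dhchap-ctrlr-key so both sides must authenticate. A minimal host-side sketch of the same sequence, assuming the stock scripts/rpc.py client and keys already registered under the names key2/ckey2 (key registration is not shown in this trace):

# Sketch only: pin the initiator to hmac(sha512)/ffdhe3072, then attach
# with DH-HMAC-CHAP host and controller keys (bidirectional authentication).
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2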
00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.707 nvme0n1 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.707 22:00:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.707 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:44.965 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
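As in the earlier iterations, each attach is verified and torn down before the next key is tried: host/auth.sh@64 lists the controllers with bdev_nvme_get_controllers, extracts the names with jq, compares the result against nvme0 (the -b name given at attach time), and host/auth.sh@65 detaches it. A rough equivalent of that check, again assuming the scripts/rpc.py client:

# Sketch only: confirm the authenticated attach produced a controller, then detach it.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || { echo "controller missing"; exit 1; }
./scripts/rpc.py bdev_nvme_detach_controller nvme0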
00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 nvme0n1 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.966 
22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.966 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.224 nvme0n1 00:20:45.224 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.224 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.224 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.224 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.224 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.224 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.224 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.225 22:00:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.483 nvme0n1 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.483 22:00:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.483 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.741 nvme0n1 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
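The trace above covers one full DH-HMAC-CHAP round for keyid 1 with sha512/ffdhe4096. Stripped of the xtrace noise, the host-side RPC sequence reduces to the sketch below; it assumes rpc_cmd is the autotest wrapper around scripts/rpc.py and that the key1/ckey1 secrets were already registered with the SPDK keyring earlier in auth.sh, outside this excerpt.

# One authentication round, host side only (sketch of what host/auth.sh@60-65 traces above).
digest=sha512
dhgroup=ffdhe4096
# Restrict the host to a single digest/DH-group combination for this round.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Attach with the host key and the bidirectional controller key; the attach only
# succeeds if the target accepts the DH-HMAC-CHAP handshake.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Verify the controller actually came up, then tear it down before the next key.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0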
00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.741 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.000 nvme0n1 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.000 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 nvme0n1 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.259 22:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.517 nvme0n1 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
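At host/auth.sh@101-103 above, the dhgroup loop has just advanced from ffdhe4096 to ffdhe6144 and the keyid loop restarts at 0. The driving structure is therefore roughly the nested loop sketched below; dhgroups[] and keys[] are populated earlier in auth.sh, so only the values visible in this excerpt are listed and the rest is an assumption.

# Shape of the loop driving this trace (sketch): every DH group is exercised with
# every key id, target-side key first, then a host-side connect/verify/detach.
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do        # groups seen in this excerpt
    for keyid in "${!keys[@]}"; do                      # keyids 0..4 in this run
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"   # program the kernel nvmet target
        connect_authenticate sha512 "$dhgroup" "$keyid" # attach, check, detach (see above)
    done
done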
00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.517 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.083 nvme0n1 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
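The nvmf/common.sh@741-755 lines repeated in every round are the get_main_ns_ip helper choosing which address the host dials. Reconstructed from the trace it behaves roughly as below; the $TEST_TRANSPORT variable name is an assumption, since only its expanded value (tcp) appears in the xtrace output.

# Rough reconstruction of get_main_ns_ip from the nvmf/common.sh@741-755 trace lines.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this one) dial the initiator IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}         # name of the env var to dereference
    [[ -z ${!ip} ]] && return 1                  # expands to 10.0.0.1 in this run
    echo "${!ip}"
}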
00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.083 22:00:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.341 nvme0n1 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.341 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.599 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.600 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.857 nvme0n1 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.857 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.423 nvme0n1 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.423 22:00:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.682 nvme0n1 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.682 22:00:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5ZTdmMzEzMmEyZjlhOTIyNzE4N2EzNzY4N2FjNmYhc4l9: 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTNiMjlkNzFkMzMxMmYxZWU2MTBjM2U5MjRjMzk3NTQwZDgzNmQ0YTA5ZDczNDRhM2IyZmY4Zjg4NGE5MzgyMAKMlkM=: 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.682 22:00:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.618 nvme0n1 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.618 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.185 nvme0n1 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.185 22:00:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY4MzllZjk4MGU5NDc5MDg5N2UxZWUzNDMwMzJmZTFVaicQ: 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmUzMmMxY2MyOGUwYjAxNzNhN2U3MGU4M2FkOWZhODCp8SOa: 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.185 22:00:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.751 nvme0n1 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGM1MWE5NTcyMTM0ZTZkOWI1ZTdlMDc2ZTcwN2NmZTQyNTM4NzFjZDJlNWJhYzIyhitLVw==: 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: ]] 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGE2N2I0M2I0YzBjNWQ3NWYyZWUwZGE1NjkxNzNiNTa9sV2H: 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:50.751 22:00:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.751 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.752 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.317 nvme0n1 00:20:51.317 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.317 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.317 22:00:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.317 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.317 22:00:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.317 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTJmZGUwMjczZWQ2NDAxMjA5OTJkYzNhNjg2OTJiMzFkYjVhMjBkZTA4OTQ1MjA3MjUwY2FlNDhjNmViYTM3NKLvk4U=: 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:51.578 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.191 nvme0n1 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQzZmE2YzE5OTg0MWQxNzE3MmZiNjBjNzI2MDhlMmJjZWJjYzg3NzI2ZTJmOTZmXaQgiw==: 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA5MjI2ODFkYzI1NWY1NDFlNWNhNzkzYmJiYjRhMjA0NTk5NTU4NjA4MjllNWM3fBcxGw==: 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.191 
22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.191 request: 00:20:52.191 { 00:20:52.191 "name": "nvme0", 00:20:52.191 "trtype": "tcp", 00:20:52.191 "traddr": "10.0.0.1", 00:20:52.191 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:52.191 "adrfam": "ipv4", 00:20:52.191 "trsvcid": "4420", 00:20:52.191 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.191 "method": "bdev_nvme_attach_controller", 00:20:52.191 "req_id": 1 00:20:52.191 } 00:20:52.191 Got JSON-RPC error response 00:20:52.191 response: 00:20:52.191 { 00:20:52.191 "code": -5, 00:20:52.191 "message": "Input/output error" 00:20:52.191 } 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:52.191 
22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.191 request: 00:20:52.191 { 00:20:52.191 "name": "nvme0", 00:20:52.191 "trtype": "tcp", 00:20:52.191 "traddr": "10.0.0.1", 00:20:52.191 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:52.191 "adrfam": "ipv4", 00:20:52.191 "trsvcid": "4420", 00:20:52.191 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.191 "dhchap_key": "key2", 00:20:52.191 "method": "bdev_nvme_attach_controller", 00:20:52.191 "req_id": 1 00:20:52.191 } 00:20:52.191 Got JSON-RPC error response 00:20:52.191 response: 00:20:52.191 { 00:20:52.191 "code": -5, 00:20:52.191 "message": "Input/output error" 00:20:52.191 } 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.191 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.192 
22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.192 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.450 request: 00:20:52.450 { 00:20:52.450 "name": "nvme0", 00:20:52.450 "trtype": "tcp", 00:20:52.451 "traddr": "10.0.0.1", 00:20:52.451 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:52.451 "adrfam": "ipv4", 00:20:52.451 "trsvcid": "4420", 00:20:52.451 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:52.451 "dhchap_key": "key1", 00:20:52.451 "dhchap_ctrlr_key": "ckey2", 00:20:52.451 "method": "bdev_nvme_attach_controller", 00:20:52.451 "req_id": 1 
00:20:52.451 } 00:20:52.451 Got JSON-RPC error response 00:20:52.451 response: 00:20:52.451 { 00:20:52.451 "code": -5, 00:20:52.451 "message": "Input/output error" 00:20:52.451 } 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.451 rmmod nvme_tcp 00:20:52.451 rmmod nvme_fabrics 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 93278 ']' 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 93278 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 93278 ']' 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 93278 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93278 00:20:52.451 killing process with pid 93278 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93278' 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 93278 00:20:52.451 22:00:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 93278 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:52.710 22:00:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:53.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.645 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.645 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:53.645 22:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tMF /tmp/spdk.key-null.Tz9 /tmp/spdk.key-sha256.dXi /tmp/spdk.key-sha384.1l5 /tmp/spdk.key-sha512.KHV /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:53.645 22:00:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:53.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.904 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:53.904 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:53.904 ************************************ 00:20:53.904 END TEST nvmf_auth_host 00:20:53.904 ************************************ 00:20:53.904 00:20:53.904 real 0m34.900s 00:20:53.904 user 0m31.651s 00:20:53.904 sys 0m3.672s 00:20:53.904 22:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:53.904 22:00:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.163 22:00:59 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:20:54.163 22:00:59 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:54.163 22:00:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:54.163 22:00:59 
nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:54.163 22:00:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.163 ************************************ 00:20:54.163 START TEST nvmf_digest 00:20:54.163 ************************************ 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:54.163 * Looking for test storage... 00:20:54.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.163 22:00:59 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:54.163 Cannot find device "nvmf_tgt_br" 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.163 Cannot find device "nvmf_tgt_br2" 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:54.163 Cannot find device "nvmf_tgt_br" 00:20:54.163 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:54.164 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:54.164 Cannot find device "nvmf_tgt_br2" 00:20:54.164 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:54.164 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.422 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:54.422 22:00:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:54.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:54.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:54.422 00:20:54.422 --- 10.0.0.2 ping statistics --- 00:20:54.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.422 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:54.422 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:54.422 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:20:54.422 00:20:54.422 --- 10.0.0.3 ping statistics --- 00:20:54.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.422 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:54.422 00:20:54.422 --- 10.0.0.1 ping statistics --- 00:20:54.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.422 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:54.422 ************************************ 00:20:54.422 START TEST nvmf_digest_clean 00:20:54.422 ************************************ 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.422 22:01:00 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:54.422 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=94844 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 94844 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 94844 ']' 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:54.680 22:01:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 [2024-07-24 22:01:00.217488] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:20:54.680 [2024-07-24 22:01:00.217588] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.680 [2024-07-24 22:01:00.352552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.938 [2024-07-24 22:01:00.450449] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.938 [2024-07-24 22:01:00.450516] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.938 [2024-07-24 22:01:00.450533] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.938 [2024-07-24 22:01:00.450541] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.938 [2024-07-24 22:01:00.450548] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:54.938 [2024-07-24 22:01:00.450579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.503 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.503 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:55.503 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.503 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.503 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:55.761 [2024-07-24 22:01:01.321817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:55.761 null0 00:20:55.761 [2024-07-24 22:01:01.380499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.761 [2024-07-24 22:01:01.404781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94876 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94876 /var/tmp/bperf.sock 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 94876 ']' 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:55.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:55.761 22:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:55.761 [2024-07-24 22:01:01.464567] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:20:55.761 [2024-07-24 22:01:01.465027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94876 ] 00:20:56.020 [2024-07-24 22:01:01.606479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.020 [2024-07-24 22:01:01.701008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.955 22:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:56.955 22:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:56.955 22:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:56.955 22:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:56.955 22:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:57.215 [2024-07-24 22:01:02.779718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:57.215 22:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.215 22:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.473 nvme0n1 00:20:57.473 22:01:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:57.473 22:01:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:57.732 Running I/O for 2 seconds... 
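For reference, the digest_clean run traced above boils down to a short RPC sequence driven against the bdevperf control socket. The sketch below is assembled from the trace, not captured output; the binary path, socket path, target address and subsystem NQN are the ones used in this job:

# Start bdevperf paused so it can be configured over its RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# Complete framework initialization once the app is listening on the socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# Attach the NVMe/TCP controller with data digest enabled (--ddgst), exposing nvme0n1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Run the timed randread workload (2 seconds, queue depth 128, 4096-byte I/O).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests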
00:20:59.633 00:20:59.634 Latency(us) 00:20:59.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.634 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:59.634 nvme0n1 : 2.00 15531.03 60.67 0.00 0.00 8236.34 1906.50 19899.11 00:20:59.634 =================================================================================================================== 00:20:59.634 Total : 15531.03 60.67 0.00 0.00 8236.34 1906.50 19899.11 00:20:59.634 0 00:20:59.634 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:59.634 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:59.634 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:59.634 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:59.634 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:59.634 | select(.opcode=="crc32c") 00:20:59.634 | "\(.module_name) \(.executed)"' 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94876 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 94876 ']' 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 94876 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94876 00:20:59.893 killing process with pid 94876 00:20:59.893 Received shutdown signal, test time was about 2.000000 seconds 00:20:59.893 00:20:59.893 Latency(us) 00:20:59.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.893 =================================================================================================================== 00:20:59.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94876' 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 94876 00:20:59.893 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 94876 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94938 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94938 /var/tmp/bperf.sock 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 94938 ']' 00:21:00.151 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:00.152 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:00.152 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:00.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:00.152 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:00.152 22:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:00.152 [2024-07-24 22:01:05.836021] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:00.152 [2024-07-24 22:01:05.836277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94938 ] 00:21:00.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:00.152 Zero copy mechanism will not be used. 
00:21:00.410 [2024-07-24 22:01:05.970053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.410 [2024-07-24 22:01:06.047051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.344 22:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.344 22:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:01.344 22:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:01.344 22:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:01.344 22:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:01.603 [2024-07-24 22:01:07.069973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:01.603 22:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.603 22:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.861 nvme0n1 00:21:01.861 22:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:01.861 22:01:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:01.861 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:01.861 Zero copy mechanism will not be used. 00:21:01.861 Running I/O for 2 seconds... 
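After each timed run the harness reads the accel framework's crc32c counters back over the same socket and checks that the expected module did the digest work; the rpc/jq pair traced around the results boils down to something like this sketch (scan_dsa=false for these runs, so the software module is expected):

    SPDK=/home/vagrant/spdk_repo/spdk
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))          # some digests must actually have been computed
    [[ $acc_module == software ]]   # DSA scanning is off, so the software path should have been used
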
00:21:04.419 00:21:04.419 Latency(us) 00:21:04.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.419 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:04.419 nvme0n1 : 2.00 7936.90 992.11 0.00 0.00 2012.79 1720.32 6732.33 00:21:04.419 =================================================================================================================== 00:21:04.419 Total : 7936.90 992.11 0.00 0.00 2012.79 1720.32 6732.33 00:21:04.419 0 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:04.419 | select(.opcode=="crc32c") 00:21:04.419 | "\(.module_name) \(.executed)"' 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94938 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 94938 ']' 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 94938 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94938 00:21:04.419 killing process with pid 94938 00:21:04.419 Received shutdown signal, test time was about 2.000000 seconds 00:21:04.419 00:21:04.419 Latency(us) 00:21:04.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.419 =================================================================================================================== 00:21:04.419 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94938' 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 94938 00:21:04.419 22:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 94938 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94998 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94998 /var/tmp/bperf.sock 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 94998 ']' 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:04.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:04.419 22:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:04.419 [2024-07-24 22:01:10.072479] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:21:04.419 [2024-07-24 22:01:10.072768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94998 ] 00:21:04.678 [2024-07-24 22:01:10.204416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.678 [2024-07-24 22:01:10.277779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.614 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:05.615 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:05.615 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:05.615 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:05.615 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:05.615 [2024-07-24 22:01:11.314741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:05.872 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.872 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:06.131 nvme0n1 00:21:06.131 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:06.131 22:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:06.131 Running I/O for 2 seconds... 
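The four clean-digest runs in this test only vary the workload knobs handed through to bdevperf; in terms of the run_bperf helper traced above they are:

    # run_bperf <rw> <bs> <qd> <scan_dsa>  ->  bdevperf -w <rw> -o <bs> -q <qd>
    run_bperf randread  4096   128 false   # 4 KiB random reads, queue depth 128
    run_bperf randread  131072 16  false   # 128 KiB random reads, queue depth 16 (zero copy disabled: 131072 > 65536)
    run_bperf randwrite 4096   128 false   # 4 KiB random writes, queue depth 128
    run_bperf randwrite 131072 16  false   # 128 KiB random writes, queue depth 16

(The first invocation is inferred from its reported workload; the other three appear verbatim in the traces.)
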
00:21:08.663 00:21:08.663 Latency(us) 00:21:08.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.663 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:08.663 nvme0n1 : 2.00 17235.40 67.33 0.00 0.00 7419.85 6523.81 15847.80 00:21:08.663 =================================================================================================================== 00:21:08.663 Total : 17235.40 67.33 0.00 0.00 7419.85 6523.81 15847.80 00:21:08.663 0 00:21:08.663 22:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:08.663 22:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:08.663 22:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:08.663 22:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:08.663 22:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:08.663 | select(.opcode=="crc32c") 00:21:08.663 | "\(.module_name) \(.executed)"' 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94998 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 94998 ']' 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 94998 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94998 00:21:08.663 killing process with pid 94998 00:21:08.663 Received shutdown signal, test time was about 2.000000 seconds 00:21:08.663 00:21:08.663 Latency(us) 00:21:08.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.663 =================================================================================================================== 00:21:08.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94998' 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 94998 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 94998 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95053 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95053 /var/tmp/bperf.sock 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95053 ']' 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:08.663 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:08.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:08.664 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:08.664 22:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:08.664 [2024-07-24 22:01:14.365080] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:08.664 [2024-07-24 22:01:14.365390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95053 ] 00:21:08.664 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:08.664 Zero copy mechanism will not be used. 
00:21:08.922 [2024-07-24 22:01:14.500736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.922 [2024-07-24 22:01:14.577569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.918 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:09.918 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:09.918 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:09.918 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:09.919 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:09.919 [2024-07-24 22:01:15.552144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:09.919 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:09.919 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:10.177 nvme0n1 00:21:10.436 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:10.436 22:01:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:10.436 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:10.436 Zero copy mechanism will not be used. 00:21:10.436 Running I/O for 2 seconds... 
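Each bperf child is torn down with the killprocess helper whose xtrace appears around the runs; a rough, simplified reconstruction of what those traced checks amount to (the real helper in common/autotest_common.sh handles more cases, such as processes launched through sudo):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # the '[' -z <pid> ']' guard in the trace
        kill -0 "$pid" || return 0                 # nothing to do if it already exited
        local process_name=unknown
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        [ "$process_name" = sudo ] || kill "$pid"  # the sudo case takes a different path in the real helper
        wait "$pid"
    }
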
00:21:12.336 00:21:12.336 Latency(us) 00:21:12.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.336 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:12.336 nvme0n1 : 2.00 6417.88 802.24 0.00 0.00 2487.28 1846.92 5481.19 00:21:12.336 =================================================================================================================== 00:21:12.336 Total : 6417.88 802.24 0.00 0.00 2487.28 1846.92 5481.19 00:21:12.336 0 00:21:12.336 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:12.336 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:12.336 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:12.336 | select(.opcode=="crc32c") 00:21:12.336 | "\(.module_name) \(.executed)"' 00:21:12.336 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:12.594 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95053 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95053 ']' 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95053 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95053 00:21:12.852 killing process with pid 95053 00:21:12.852 Received shutdown signal, test time was about 2.000000 seconds 00:21:12.852 00:21:12.852 Latency(us) 00:21:12.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.852 =================================================================================================================== 00:21:12.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95053' 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95053 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95053 00:21:12.852 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94844 00:21:12.853 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 94844 ']' 00:21:12.853 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 94844 00:21:12.853 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:12.853 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:12.853 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94844 00:21:13.111 killing process with pid 94844 00:21:13.111 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:13.111 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:13.111 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94844' 00:21:13.111 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 94844 00:21:13.111 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 94844 00:21:13.111 00:21:13.111 real 0m18.659s 00:21:13.111 user 0m35.900s 00:21:13.111 sys 0m4.843s 00:21:13.111 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:13.111 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.111 ************************************ 00:21:13.111 END TEST nvmf_digest_clean 00:21:13.111 ************************************ 00:21:13.369 22:01:18 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:13.369 22:01:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:13.369 22:01:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:13.370 ************************************ 00:21:13.370 START TEST nvmf_digest_error 00:21:13.370 ************************************ 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=95136 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 95136 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95136 ']' 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:13.370 22:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:13.370 [2024-07-24 22:01:18.898751] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:13.370 [2024-07-24 22:01:18.899015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.370 [2024-07-24 22:01:19.031540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.628 [2024-07-24 22:01:19.099146] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.628 [2024-07-24 22:01:19.099199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.628 [2024-07-24 22:01:19.099226] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.628 [2024-07-24 22:01:19.099234] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.628 [2024-07-24 22:01:19.099241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.628 [2024-07-24 22:01:19.099264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:14.194 [2024-07-24 22:01:19.835757] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.194 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:21:14.194 [2024-07-24 22:01:19.897938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:14.453 null0 00:21:14.453 [2024-07-24 22:01:19.944000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.453 [2024-07-24 22:01:19.968105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95168 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95168 /var/tmp/bperf.sock 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95168 ']' 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:14.453 22:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:14.453 [2024-07-24 22:01:20.031170] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:21:14.453 [2024-07-24 22:01:20.031588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95168 ] 00:21:14.712 [2024-07-24 22:01:20.172451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.712 [2024-07-24 22:01:20.246754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.712 [2024-07-24 22:01:20.304576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:15.290 22:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:15.290 22:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:15.290 22:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:15.291 22:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:15.548 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:15.548 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.548 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:15.548 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.548 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.548 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.805 nvme0n1 00:21:15.806 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:15.806 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.806 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:15.806 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.806 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:15.806 22:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.063 Running I/O for 2 seconds... 
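The flood of data digest errors that follows is induced deliberately: crc32c was assigned to the error accel module when the target started, and the RPCs traced above arm that module to corrupt digests while the initiator is told to keep retrying and to count errors. Condensed from the traced commands (an illustrative sketch; the un-prefixed rpc.py calls go to the target's default /var/tmp/spdk.sock socket):

    SPDK=/home/vagrant/spdk_repo/spdk
    # target side (before framework_start_init): route crc32c through the "error" accel module
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
    # initiator (bperf) side: unlimited bdev retries plus per-error NVMe statistics
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with injection disabled, attach with data digest, then corrupt the next 256 crc32c operations
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # the read completions below then fail digest verification and are retried
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
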
00:21:16.063 [2024-07-24 22:01:21.629517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.629567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.629599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.647214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.647257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.647272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.664202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.664239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.664270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.680270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.680306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.680336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.696178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.696215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.696245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.712113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.712152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.712167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.727827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.727861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.727890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.743498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.743535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.743565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.759327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.759363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.759392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.063 [2024-07-24 22:01:21.775302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.063 [2024-07-24 22:01:21.775338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.063 [2024-07-24 22:01:21.775369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.322 [2024-07-24 22:01:21.792869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.322 [2024-07-24 22:01:21.792913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.322 [2024-07-24 22:01:21.792927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.322 [2024-07-24 22:01:21.808158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.322 [2024-07-24 22:01:21.808192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.322 [2024-07-24 22:01:21.808220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.322 [2024-07-24 22:01:21.823367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.322 [2024-07-24 22:01:21.823401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.322 [2024-07-24 22:01:21.823431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.322 [2024-07-24 22:01:21.839916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.839951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.839979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.855149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.855184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.855213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.871653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.871715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.871745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.887296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.887331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.887359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.902586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.902645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.902675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.919740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.919773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.919801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.935578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.935639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.935669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.950735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.950769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.950782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.966689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.966752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.966766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.982558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.982591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.982619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:21.997594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:21.997652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:21.997682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:22.013296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:22.013331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:22.013376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.323 [2024-07-24 22:01:22.030160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.323 [2024-07-24 22:01:22.030198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.323 [2024-07-24 22:01:22.030227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.047819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.047859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.047874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.065059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.065095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25556 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.065109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.082945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.082999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.083013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.099741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.099776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.099804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.116553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.116591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.116620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.133313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.133354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.133384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.149093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.149134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.149179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.165455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.165491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.165520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.182506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.182543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:4757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.182572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.198253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.198291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.198320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.215083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.215121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.215150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.231311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.231347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.231375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.246991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.247028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.247057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.263707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.263742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.263771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.279777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.279813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.279841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.583 [2024-07-24 22:01:22.295741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.583 [2024-07-24 22:01:22.295793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.583 [2024-07-24 22:01:22.295823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.311913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.311947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.311976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.327149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.327183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.327211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.342981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.343034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.343064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.360929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.360967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.360981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.377660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.377723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.377754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.392869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.392906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.392919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.408053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 
00:21:16.843 [2024-07-24 22:01:22.408101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.408129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.423379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.423414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.843 [2024-07-24 22:01:22.423442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.843 [2024-07-24 22:01:22.439605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.843 [2024-07-24 22:01:22.439660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.439672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.844 [2024-07-24 22:01:22.454786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.844 [2024-07-24 22:01:22.454820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.454848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.844 [2024-07-24 22:01:22.470103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.844 [2024-07-24 22:01:22.470139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.470169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.844 [2024-07-24 22:01:22.485985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.844 [2024-07-24 22:01:22.486022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.486052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.844 [2024-07-24 22:01:22.502362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.844 [2024-07-24 22:01:22.502397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.502426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.844 [2024-07-24 22:01:22.518163] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.844 [2024-07-24 22:01:22.518198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.518228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.844 [2024-07-24 22:01:22.533877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.844 [2024-07-24 22:01:22.533909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.533938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.844 [2024-07-24 22:01:22.548899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:16.844 [2024-07-24 22:01:22.548936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.844 [2024-07-24 22:01:22.548949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.565185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.103 [2024-07-24 22:01:22.565220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.103 [2024-07-24 22:01:22.565249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.580423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.103 [2024-07-24 22:01:22.580472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.103 [2024-07-24 22:01:22.580501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.595645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.103 [2024-07-24 22:01:22.595678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.103 [2024-07-24 22:01:22.595707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.612014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.103 [2024-07-24 22:01:22.612051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.103 [2024-07-24 22:01:22.612081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.628240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.103 [2024-07-24 22:01:22.628274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.103 [2024-07-24 22:01:22.628302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.649986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.103 [2024-07-24 22:01:22.650020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.103 [2024-07-24 22:01:22.650049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.665406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.103 [2024-07-24 22:01:22.665439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.103 [2024-07-24 22:01:22.665467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.103 [2024-07-24 22:01:22.680638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.680701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.680729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.696926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.696963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.696977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.713297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.713332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.713362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.729236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.729272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.729317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.744437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.744471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.744500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.759696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.759727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.759739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.775477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.775513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.775542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.792554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.792593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.792635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.104 [2024-07-24 22:01:22.809053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.104 [2024-07-24 22:01:22.809091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.104 [2024-07-24 22:01:22.809105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.363 [2024-07-24 22:01:22.825465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.363 [2024-07-24 22:01:22.825499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.363 [2024-07-24 22:01:22.825527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.363 [2024-07-24 22:01:22.841410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.363 [2024-07-24 22:01:22.841445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.363 [2024-07-24 
22:01:22.841473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.363 [2024-07-24 22:01:22.857100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.363 [2024-07-24 22:01:22.857136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.363 [2024-07-24 22:01:22.857169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.363 [2024-07-24 22:01:22.873869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.363 [2024-07-24 22:01:22.873903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.363 [2024-07-24 22:01:22.873932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.363 [2024-07-24 22:01:22.889807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.363 [2024-07-24 22:01:22.889843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.363 [2024-07-24 22:01:22.889872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.363 [2024-07-24 22:01:22.905442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.363 [2024-07-24 22:01:22.905478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.363 [2024-07-24 22:01:22.905506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.363 [2024-07-24 22:01:22.921118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.363 [2024-07-24 22:01:22.921173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.363 [2024-07-24 22:01:22.921203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:22.936772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:22.936815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:22.936861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:22.952867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:22.952915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:467 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:22.952928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:22.968937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:22.968986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:22.969001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:22.984869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:22.984908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:22.984922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:23.000944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:23.000981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:23.000995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:23.016230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:23.016264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:23.016292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:23.031540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:23.031574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:23.031602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:23.048066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:23.048134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:23.048156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.364 [2024-07-24 22:01:23.065881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.364 [2024-07-24 22:01:23.065922] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.364 [2024-07-24 22:01:23.065953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.082944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.082980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.083008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.098306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.098343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.098373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.113695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.113728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.113757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.129902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.129937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.129965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.146031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.146085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.146114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.161202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.161237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.161281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.176373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.176413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.176442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.191707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.191739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.191767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.207062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.207096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.207124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.223379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.223413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.623 [2024-07-24 22:01:23.223442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.623 [2024-07-24 22:01:23.238576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.623 [2024-07-24 22:01:23.238638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.624 [2024-07-24 22:01:23.238668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.624 [2024-07-24 22:01:23.254645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.624 [2024-07-24 22:01:23.254705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.624 [2024-07-24 22:01:23.254735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.624 [2024-07-24 22:01:23.270702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.624 [2024-07-24 22:01:23.270735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.624 [2024-07-24 22:01:23.270762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.624 [2024-07-24 22:01:23.285932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ce5fa0) 00:21:17.624 [2024-07-24 22:01:23.285966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.624 [2024-07-24 22:01:23.285979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.624 [2024-07-24 22:01:23.304145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.624 [2024-07-24 22:01:23.304187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.624 [2024-07-24 22:01:23.304217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.624 [2024-07-24 22:01:23.320556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.624 [2024-07-24 22:01:23.320597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.624 [2024-07-24 22:01:23.320663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.624 [2024-07-24 22:01:23.336371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.624 [2024-07-24 22:01:23.336408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.624 [2024-07-24 22:01:23.336422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.352942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.352981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.352995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.370134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.370173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.370187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.387647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.387692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.387721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.405087] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.405128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.405142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.421447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.421482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.421511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.437264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.437317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.437347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.453004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.453045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.453060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.469169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.469221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.469250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.485774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.485810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.485839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.502280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.502316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.502329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:17.882 [2024-07-24 22:01:23.518353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.518387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.518417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.534003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.534042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.534055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.550809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.550845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.882 [2024-07-24 22:01:23.550875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.882 [2024-07-24 22:01:23.566935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.882 [2024-07-24 22:01:23.566970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.883 [2024-07-24 22:01:23.566999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.883 [2024-07-24 22:01:23.582445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.883 [2024-07-24 22:01:23.582495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.883 [2024-07-24 22:01:23.582524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.883 [2024-07-24 22:01:23.598524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ce5fa0) 00:21:17.883 [2024-07-24 22:01:23.598558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.883 [2024-07-24 22:01:23.598586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.141 00:21:18.141 Latency(us) 00:21:18.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.141 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:18.141 nvme0n1 : 2.01 15697.23 61.32 0.00 0.00 8148.48 7238.75 29550.78 00:21:18.141 =================================================================================================================== 00:21:18.141 Total : 15697.23 61.32 0.00 
0.00 8148.48 7238.75 29550.78 00:21:18.141 0 00:21:18.141 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:18.141 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:18.141 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:18.141 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:18.141 | .driver_specific 00:21:18.141 | .nvme_error 00:21:18.141 | .status_code 00:21:18.141 | .command_transient_transport_error' 00:21:18.399 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95168 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95168 ']' 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95168 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95168 00:21:18.400 killing process with pid 95168 00:21:18.400 Received shutdown signal, test time was about 2.000000 seconds 00:21:18.400 00:21:18.400 Latency(us) 00:21:18.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.400 =================================================================================================================== 00:21:18.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95168' 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95168 00:21:18.400 22:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95168 00:21:18.658 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:18.658 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:18.658 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95229 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95229 /var/tmp/bperf.sock 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@827 -- # '[' -z 95229 ']' 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:18.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:18.659 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:18.659 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:18.659 Zero copy mechanism will not be used. 00:21:18.659 [2024-07-24 22:01:24.180737] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:18.659 [2024-07-24 22:01:24.180878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95229 ] 00:21:18.659 [2024-07-24 22:01:24.313864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.918 [2024-07-24 22:01:24.378116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.918 [2024-07-24 22:01:24.434336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:18.918 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.918 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:18.918 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:18.918 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:19.176 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:19.176 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.176 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:19.176 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.176 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.177 22:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.435 nvme0n1 00:21:19.435 22:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:19.435 22:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.435 22:01:25 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:19.435 22:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.435 22:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:19.435 22:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.696 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:19.696 Zero copy mechanism will not be used. 00:21:19.696 Running I/O for 2 seconds... 00:21:19.696 [2024-07-24 22:01:25.171746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:19.696 [2024-07-24 22:01:25.171824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.696 [2024-07-24 22:01:25.171841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.696 [2024-07-24 22:01:25.176087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:19.696 [2024-07-24 22:01:25.176124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.696 [2024-07-24 22:01:25.176153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.696 [2024-07-24 22:01:25.180131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:19.696 [2024-07-24 22:01:25.180166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.696 [2024-07-24 22:01:25.180195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.696 [2024-07-24 22:01:25.184119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:19.696 [2024-07-24 22:01:25.184155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.696 [2024-07-24 22:01:25.184184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.696 [2024-07-24 22:01:25.188598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:19.696 [2024-07-24 22:01:25.188676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.696 [2024-07-24 22:01:25.188720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.696 [2024-07-24 22:01:25.193040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:19.696 [2024-07-24 22:01:25.193078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:19.696 [2024-07-24 22:01:25.193092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:19.696 [2024-07-24 22:01:25.197563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0)
00:21:19.696 [2024-07-24 22:01:25.197598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.696 [2024-07-24 22:01:25.197638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1450 data digest error on tqpair=(0x1b90cd0), the READ command print for sqid:1 cid:15 nsid:1 len:32, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens of additional LBAs between 22:01:25.202128 and 22:01:25.798607 ...]
00:21:20.223 [2024-07-24 22:01:25.802946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0)
00:21:20.223 [2024-07-24 22:01:25.802992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:20.223 [2024-07-24 22:01:25.803004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041
p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.807469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.807506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.807535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.811916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.811954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.811985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.816191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.816229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.816243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.820371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.820409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.820423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.824563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.824601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.824629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.828885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.828924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.828941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.833219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.833257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.833272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.837464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.837501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.837531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.841769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.841806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.841836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.845996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.846047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.846091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.850418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.850455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.850484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.854649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.854716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.854731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.858842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.858880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.858895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.863105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.863145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.863159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.867346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.867383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.867412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.871607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.871670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.871684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.875724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.875759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.875787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.880378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.880415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.880444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.884606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.884667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.884693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.888736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.888774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.888803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.893198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.893235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.893263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.897331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.897366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.897395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.901800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.901855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.901884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.223 [2024-07-24 22:01:25.906164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.223 [2024-07-24 22:01:25.906202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.223 [2024-07-24 22:01:25.906232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.224 [2024-07-24 22:01:25.910556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.224 [2024-07-24 22:01:25.910593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.224 [2024-07-24 22:01:25.910635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.224 [2024-07-24 22:01:25.915063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.224 [2024-07-24 22:01:25.915121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.224 [2024-07-24 22:01:25.915136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.224 [2024-07-24 22:01:25.919446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.224 [2024-07-24 22:01:25.919512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.224 [2024-07-24 22:01:25.919542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.224 [2024-07-24 22:01:25.923534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.224 [2024-07-24 22:01:25.923570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.224 [2024-07-24 22:01:25.923599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.224 [2024-07-24 22:01:25.927584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.224 [2024-07-24 22:01:25.927645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.224 [2024-07-24 22:01:25.927660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.224 [2024-07-24 22:01:25.931742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.224 [2024-07-24 22:01:25.931777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.224 [2024-07-24 22:01:25.931806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.224 [2024-07-24 22:01:25.935921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.224 [2024-07-24 22:01:25.935958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.224 [2024-07-24 22:01:25.935987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.484 [2024-07-24 22:01:25.940207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.484 [2024-07-24 22:01:25.940243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.484 [2024-07-24 22:01:25.940272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.484 [2024-07-24 22:01:25.944655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.484 [2024-07-24 22:01:25.944717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.484 [2024-07-24 22:01:25.944748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.484 [2024-07-24 22:01:25.948793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.484 [2024-07-24 22:01:25.948855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.484 [2024-07-24 22:01:25.948869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.484 [2024-07-24 22:01:25.953089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.484 [2024-07-24 22:01:25.953128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.484 [2024-07-24 22:01:25.953142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.484 [2024-07-24 22:01:25.957264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.484 [2024-07-24 22:01:25.957299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.484 [2024-07-24 22:01:25.957328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.484 [2024-07-24 22:01:25.961410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.484 [2024-07-24 22:01:25.961444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.484 [2024-07-24 22:01:25.961474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.484 [2024-07-24 22:01:25.965838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.965875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.965904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.970299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.970338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.970353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.974440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.974474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.974502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.978560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.978596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.978650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.982748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 
00:21:20.485 [2024-07-24 22:01:25.982783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.982812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.986761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.986795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.986823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.990893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.990928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.990957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.994950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.994984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.995013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:25.999059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:25.999108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:25.999154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.003134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.003168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.003197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.007150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.007184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.007214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.011282] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.011317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.011346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.015484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.015520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.015549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.019417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.019452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.019495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.023365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.023400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.023428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.027353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.027388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.027417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.031367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.031402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.031431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.035552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.035587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.035616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.039625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.039666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.039694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.043594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.043667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.043681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.047536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.047570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.047598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.051528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.051562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.051591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.055500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.055533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.055562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.059604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.059664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.059695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.063868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.063902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.063931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.068182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.068216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.068244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.485 [2024-07-24 22:01:26.072424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.485 [2024-07-24 22:01:26.072459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.485 [2024-07-24 22:01:26.072487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.076405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.076439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.076467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.080770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.080805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.080863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.085015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.085054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.085068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.089238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.089273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.089302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.093340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.093375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.093403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.097417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.097452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.097480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.101403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.101437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.101465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.105446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.105481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.105509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.109692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.109726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.109754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.115202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.115262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.115292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.119708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.119747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.119778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.124354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.124389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:20.486 [2024-07-24 22:01:26.124418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.129045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.129087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.129101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.133447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.133484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.133528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.138012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.138062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.138091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.142444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.142480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.142509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.147172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.147212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.147226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.151952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.152019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.152053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.156481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.156516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.156544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.160666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.160708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.160736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.164567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.164602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.164659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.168526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.168560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.168589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.172476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.172511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.172539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.176512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.176547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.176576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.180601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.180665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.180694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.184492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.184527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.184556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.486 [2024-07-24 22:01:26.188689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.486 [2024-07-24 22:01:26.188733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.486 [2024-07-24 22:01:26.188746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.487 [2024-07-24 22:01:26.192762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.487 [2024-07-24 22:01:26.192796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.487 [2024-07-24 22:01:26.192817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.487 [2024-07-24 22:01:26.197140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.487 [2024-07-24 22:01:26.197206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.487 [2024-07-24 22:01:26.197235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.201523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.201558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.201587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.205745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.205795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.205824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.209902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.209936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.209964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.214235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 
00:21:20.747 [2024-07-24 22:01:26.214272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.214301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.218342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.218377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.218406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.222647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.222706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.222736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.227189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.227226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.227256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.231713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.231748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.231776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.235896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.235932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.235961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.240105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.240142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.240171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.244299] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.244335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.244364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.248493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.248528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.747 [2024-07-24 22:01:26.248557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.747 [2024-07-24 22:01:26.252541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.747 [2024-07-24 22:01:26.252576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.252605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.256529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.256564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.256593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.260573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.260647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.260678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.264775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.264859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.264874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.269005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.269042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.269057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.273344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.273409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.273438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.277607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.277667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.277697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.281827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.281864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.281893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.285945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.285980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.286009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.290025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.290075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.290104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.294223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.294260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.294289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.298257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.298291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.298320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.302237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.302272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.302301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.306586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.306645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.306674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.310944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.310978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.311006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.315147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.315183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.315212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.319390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.319469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.319497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.323865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.323899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.323927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.328187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.328239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.328251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.332429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.332478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.332508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.336455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.336488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.336517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.340347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.340381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.340409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.344418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.344455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.344484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.348651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.348711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.348726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.352678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.352712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.352741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.356908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.356946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.356960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.360960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.748 [2024-07-24 22:01:26.360998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.748 [2024-07-24 22:01:26.361012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.748 [2024-07-24 22:01:26.365095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.365133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.365161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.369215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.369250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.369279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.373301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.373335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.373364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.377372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.377408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.377438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.381336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.381371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.381400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.385699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.385733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.385762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.390020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.390055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.390101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.394328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.394364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.394393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.398703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.398736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.398764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.402900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.402934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.402962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.407487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.407521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.407549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.411857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.411890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.411919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.415994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.416028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.416057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.420413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.420451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.420494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.425000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.425038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.425052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.429531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.429570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.429584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.434030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.434083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.434098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.438606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.438666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.438696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.443549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.443585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.443613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.447871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 
00:21:20.749 [2024-07-24 22:01:26.447908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.447921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.452054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.452122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.452136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.456403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.456485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.456514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.749 [2024-07-24 22:01:26.461032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:20.749 [2024-07-24 22:01:26.461072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.749 [2024-07-24 22:01:26.461086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.465507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.465542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.465571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.469966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.470000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.470029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.474160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.474196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.474224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.478217] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.478253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.478281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.482287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.482338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.482367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.486558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.486595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.486638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.491007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.491045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.491059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.495242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.495277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.495306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.499590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.499655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.499670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.503895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.503931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.503960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.508099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.508136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.508166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.512437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.512473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.512502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.516718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.516753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.516782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.520896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.520933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.520947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.525082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.525120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.525134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.529399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.529450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.529479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.533646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.533690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.533719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.537816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.537851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.537880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.541872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.541906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.010 [2024-07-24 22:01:26.541935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.010 [2024-07-24 22:01:26.546062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.010 [2024-07-24 22:01:26.546096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.546125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.550505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.550539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.550568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.554712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.554745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.554773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.558780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.558813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.558843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.562855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.562890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.562919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.567080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.567118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.567132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.571290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.571325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.571353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.575438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.575474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.575502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.579479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.579514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.579542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.583820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.583854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.583882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.588283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.588319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.588349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.592469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.592504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.011 [2024-07-24 22:01:26.592532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.596580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.596637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.596651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.600737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.600771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.600800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.604926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.604963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.604977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.608948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.608985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.608999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.613006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.613044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.613058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.617250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.617299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.617328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.621439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.621474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.621502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.625752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.625786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.625814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.629845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.629880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.629908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.634043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.634077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.634107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.638153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.638188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.638217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.642238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.642273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.642302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.646467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.646517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.646545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.650651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.650682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.650710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.654681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.011 [2024-07-24 22:01:26.654714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.011 [2024-07-24 22:01:26.654743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.011 [2024-07-24 22:01:26.658605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.658665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.658695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.662716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.662765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.662794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.667023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.667057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.667086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.671401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.671436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.671464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.675474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.675508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.675537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.679547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 
00:21:21.012 [2024-07-24 22:01:26.679582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.679610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.683751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.683784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.683813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.687695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.687728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.687756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.691645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.691678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.691707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.695589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.695649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.695679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.699619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.699651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.699679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.703546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.703580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.703609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.707460] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.707494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.707521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.711429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.711463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.711491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.715503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.715538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.715566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.719549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.719586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.719615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.012 [2024-07-24 22:01:26.723831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.012 [2024-07-24 22:01:26.723866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.012 [2024-07-24 22:01:26.723895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.727906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.727941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.727969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.731998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.732060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.732090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.736163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.736197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.736225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.740122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.740156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.740185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.744324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.744359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.744386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.748662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.748733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.748763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.752776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.752832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.752861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.756769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.756803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.756884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.760700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.760733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.760760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.764720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.764753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.764780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.768699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.768732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.768760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.772600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.772661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.772690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.776470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.776505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.776533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.780373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.780407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.780435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.784306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.784340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.784368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.788597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.788660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.788691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.792882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.792920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.792934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.797207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.797244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.797289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.801602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.801661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.801690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.805802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.805836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.805863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.809888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.809921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.809949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.813870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.813902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.273 [2024-07-24 22:01:26.813930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.273 [2024-07-24 22:01:26.817896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.273 [2024-07-24 22:01:26.817929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.273 [2024-07-24 22:01:26.817957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.821886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.821919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.821947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.825771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.825803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.825831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.829695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.829727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.829754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.833684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.833745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.833773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.837750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.837785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.837814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.842112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.842150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.842163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.846713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.846778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.846808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.851147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.851183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.851211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.855321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.855357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.855386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.859489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.859526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.859554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.863535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.863570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.863598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.867535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.867569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.867597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.871663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.871727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.871739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.875718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.875749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.875762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.879745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.879782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.879795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.883773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.883808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.883821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.887730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.887767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.887780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.891669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.891700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.891727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.895871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.895905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.895917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.899944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.899979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.899991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.904150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 
00:21:21.274 [2024-07-24 22:01:26.904185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.904214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.908165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.908200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.908228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.912145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.912177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.912205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.916208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.274 [2024-07-24 22:01:26.916242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.274 [2024-07-24 22:01:26.916270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.274 [2024-07-24 22:01:26.920149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.920183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.920211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.924216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.924267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.924295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.928522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.928558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.928587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.932386] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.932419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.932447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.936259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.936293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.936321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.940176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.940221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.940250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.944345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.944397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.944426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.948508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.948541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.948569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.952699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.952731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.952759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.956769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.956801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.960875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.960912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.960926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.964872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.964907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.964920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.968737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.968770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.968799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.972764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.972797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.972879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.977837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.977891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.977913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.275 [2024-07-24 22:01:26.983457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.275 [2024-07-24 22:01:26.983524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.275 [2024-07-24 22:01:26.983543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.535 [2024-07-24 22:01:26.988980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.535 [2024-07-24 22:01:26.989024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.535 [2024-07-24 22:01:26.989038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:26.993694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:26.993750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:26.993772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:26.998480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:26.998519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:26.998548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.002959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.003011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.003039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.007452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.007504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.007533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.011711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.011748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.011776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.015757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.015793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.015821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.019793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.019830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.019858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.023910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.023975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.023989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.027885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.027919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.027947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.031853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.031888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.031916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.035858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.035893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.035921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.039937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.039971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.039999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.044225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.044263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.044292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.048479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.048514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.536 [2024-07-24 22:01:27.048544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.052791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.052865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.052879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.057297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.057346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.057374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.061483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.061519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.061548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.065447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.065482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.065509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.069826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.069863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.069892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.073913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.073949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.073977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.077899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.077963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.081879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.081914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.081943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.086148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.086183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.086212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.090423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.090461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.090475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.094790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.094826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.094854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.099005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.099040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.536 [2024-07-24 22:01:27.099084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.536 [2024-07-24 22:01:27.103447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.536 [2024-07-24 22:01:27.103482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.103528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.107880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.107918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.107948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.112039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.112078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.112092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.116145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.116181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.116225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.120135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.120172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.120200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.124135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.124171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.124200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.128042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.128077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.128105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.132257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.132296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.132310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.136595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 
00:21:21.537 [2024-07-24 22:01:27.136657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.136673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.141874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.141930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.141952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.146635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.146693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.146709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.151434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.151488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.151517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.155808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.155847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.155878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.537 [2024-07-24 22:01:27.160296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b90cd0) 00:21:21.537 [2024-07-24 22:01:27.160334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.537 [2024-07-24 22:01:27.160363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.537 00:21:21.537 Latency(us) 00:21:21.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.537 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:21.537 nvme0n1 : 2.00 7317.76 914.72 0.00 0.00 2183.20 1720.32 8936.73 00:21:21.537 =================================================================================================================== 00:21:21.537 Total : 7317.76 914.72 0.00 0.00 2183.20 1720.32 8936.73 00:21:21.537 0 00:21:21.537 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:21:21.537 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:21.537 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:21.537 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:21.537 | .driver_specific 00:21:21.537 | .nvme_error 00:21:21.537 | .status_code 00:21:21.537 | .command_transient_transport_error' 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 472 > 0 )) 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95229 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95229 ']' 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95229 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95229 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:21.797 killing process with pid 95229 00:21:21.797 Received shutdown signal, test time was about 2.000000 seconds 00:21:21.797 00:21:21.797 Latency(us) 00:21:21.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.797 =================================================================================================================== 00:21:21.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95229' 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95229 00:21:21.797 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95229 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95277 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95277 /var/tmp/bperf.sock 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95277 ']' 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 
-- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.056 22:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:22.056 [2024-07-24 22:01:27.749238] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:22.056 [2024-07-24 22:01:27.749639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95277 ] 00:21:22.315 [2024-07-24 22:01:27.885356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.315 [2024-07-24 22:01:27.958181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.315 [2024-07-24 22:01:28.012165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:23.251 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:23.251 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:23.251 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:23.251 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:23.251 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:23.251 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.251 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:23.510 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.510 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:23.510 22:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:23.769 nvme0n1 00:21:23.769 22:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:23.769 22:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.769 22:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:23.769 22:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.769 22:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py 
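The shell tracing around this point is the core of the nvmf_digest_error flow: bdevperf is restarted against /var/tmp/bperf.sock, the NVMe bdev layer is told to keep per-error-status statistics, crc32c corruption is injected through the accel error-injection RPC, and the controller is attached over TCP with data digest (--ddgst) enabled. Condensed into plain shell, and reusing the suite's own helper names exactly as they appear in the trace (bperf_rpc, rpc_cmd, bperf_py), the sequence looks roughly like the sketch below. The socket rpc_cmd talks to is not visible in this excerpt and the comments only paraphrase the traced options, so treat this as an illustration, not the canonical host/digest.sh.

  # Condensed sketch of the traced digest-error setup (illustrative only).
  # Helper expansions as they appear in the trace:
  #   bperf_rpc <cmd>  =>  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock <cmd>
  #   bperf_py  <cmd>  =>  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock <cmd>
  #   rpc_cmd   <cmd>  =>  rpc.py against the target application (its socket is not shown in this excerpt)
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # keep NVMe error-status counters; retry count -1 as traced
  rpc_cmd   accel_error_inject_error -o crc32c -t disable                     # begin with crc32c error injection switched off
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # attach over TCP with data digest (DDGST) enabled
  rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 256              # now corrupt crc32c results (-i 256 as traced)
  bperf_py  perform_tests                                                     # start the run ("Running I/O for 2 seconds...")
  # Read back how many commands completed with a transient transport error,
  # as in the get_transient_errcount trace earlier in this log:
  bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

With the digests corrupted in flight, each affected command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the status the surrounding dump repeats and the value that the host/digest.sh@71 check (seen earlier as (( 472 > 0 ))) asserts to be non-zero.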
perform_tests 00:21:23.769 22:01:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:23.769 Running I/O for 2 seconds... 00:21:23.769 [2024-07-24 22:01:29.385240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:23.769 [2024-07-24 22:01:29.387974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.769 [2024-07-24 22:01:29.388006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.769 [2024-07-24 22:01:29.400358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190feb58 00:21:23.770 [2024-07-24 22:01:29.402833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.770 [2024-07-24 22:01:29.402867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:23.770 [2024-07-24 22:01:29.414975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fe2e8 00:21:23.770 [2024-07-24 22:01:29.417604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.770 [2024-07-24 22:01:29.417663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:23.770 [2024-07-24 22:01:29.430039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fda78 00:21:23.770 [2024-07-24 22:01:29.432365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.770 [2024-07-24 22:01:29.432397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:23.770 [2024-07-24 22:01:29.446048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fd208 00:21:23.770 [2024-07-24 22:01:29.448471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.770 [2024-07-24 22:01:29.448506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:23.770 [2024-07-24 22:01:29.462122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fc998 00:21:23.770 [2024-07-24 22:01:29.464687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.770 [2024-07-24 22:01:29.464733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:23.770 [2024-07-24 22:01:29.478087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fc128 00:21:23.770 [2024-07-24 22:01:29.480421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:8085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:23.770 [2024-07-24 22:01:29.480454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.493990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fb8b8 00:21:24.029 [2024-07-24 22:01:29.496439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.496472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.510423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fb048 00:21:24.029 [2024-07-24 22:01:29.512940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.512976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.526597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fa7d8 00:21:24.029 [2024-07-24 22:01:29.529003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.529039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.542463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f9f68 00:21:24.029 [2024-07-24 22:01:29.544858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.544895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.557903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f96f8 00:21:24.029 [2024-07-24 22:01:29.560011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.560044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.572577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f8e88 00:21:24.029 [2024-07-24 22:01:29.574907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.574941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.587630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f8618 00:21:24.029 [2024-07-24 22:01:29.589919] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.589951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.603176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f7da8 00:21:24.029 [2024-07-24 22:01:29.605533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.605566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.618820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f7538 00:21:24.029 [2024-07-24 22:01:29.620997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.621032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.633844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f6cc8 00:21:24.029 [2024-07-24 22:01:29.635938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.635971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.648573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f6458 00:21:24.029 [2024-07-24 22:01:29.650860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.650895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.663357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f5be8 00:21:24.029 [2024-07-24 22:01:29.665566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.665598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.678472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f5378 00:21:24.029 [2024-07-24 22:01:29.680838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.680874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.693401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f4b08 00:21:24.029 [2024-07-24 22:01:29.695773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.695806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.708648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f4298 00:21:24.029 [2024-07-24 22:01:29.710699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.710729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.723070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f3a28 00:21:24.029 [2024-07-24 22:01:29.725120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.725200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:24.029 [2024-07-24 22:01:29.737588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f31b8 00:21:24.029 [2024-07-24 22:01:29.739552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.029 [2024-07-24 22:01:29.739583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.752972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f2948 00:21:24.288 [2024-07-24 22:01:29.754937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.754968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.767327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f20d8 00:21:24.288 [2024-07-24 22:01:29.769377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.769423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.782909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f1868 00:21:24.288 [2024-07-24 22:01:29.784784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.784837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.797703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f0ff8 00:21:24.288 [2024-07-24 
22:01:29.799710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.799743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.812991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f0788 00:21:24.288 [2024-07-24 22:01:29.815089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.815121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.828403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eff18 00:21:24.288 [2024-07-24 22:01:29.830424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.830456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.843142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ef6a8 00:21:24.288 [2024-07-24 22:01:29.845041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.845074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.857632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eee38 00:21:24.288 [2024-07-24 22:01:29.859479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.859510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.872102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ee5c8 00:21:24.288 [2024-07-24 22:01:29.873931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.873962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.886604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190edd58 00:21:24.288 [2024-07-24 22:01:29.888446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.888479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.900921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ed4e8 
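The repeating pattern above looks like the intended signature of this test: the controller was attached with --ddgst (per-PDU data digest) and accel_error_inject_error was set to corrupt crc32c, so each write's NVMe/TCP data digest check fails in data_crc32_calc_done and the command completes with COMMAND TRANSIENT TRANSPORT ERROR, which bdev_get_iostat tallies later in the run. Purely as an illustration (this sketch is not part of host/digest.sh or the SPDK scripts, and the helper names are invented), the Python below shows the CRC-32C computation that the data digest is based on and one way the transient-error counter could be read back over the same /var/tmp/bperf.sock RPC socket:

#!/usr/bin/env python3
# Illustrative sketch only -- not part of the SPDK autotest scripts traced above.
import json
import subprocess

RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path taken from the trace
RPC_SOCK = "/var/tmp/bperf.sock"                        # bdevperf RPC socket from the trace

def crc32c(data: bytes) -> int:
    # Bit-by-bit CRC-32C (Castagnoli), the checksum used for the NVMe/TCP data digest.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78  # reflected Castagnoli polynomial
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def transient_errcount(bdev: str = "nvme0n1") -> int:
    # Same idea as get_transient_errcount in the trace: call bdev_get_iostat, then read the
    # .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error field.
    raw = subprocess.check_output([RPC_PY, "-s", RPC_SOCK, "bdev_get_iostat", "-b", bdev])
    stats = json.loads(raw)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]

if __name__ == "__main__":
    assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C reference check value
    print(transient_errcount())

The asserted value 0xE3069283 is the published CRC-32C check value for the ASCII string "123456789", so the helper can be sanity-checked without any SPDK processes running; the counter read only works while bdevperf is still listening on the socket.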
00:21:24.288 [2024-07-24 22:01:29.902813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.288 [2024-07-24 22:01:29.902843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:24.288 [2024-07-24 22:01:29.915278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ecc78 00:21:24.289 [2024-07-24 22:01:29.917104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.289 [2024-07-24 22:01:29.917151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:24.289 [2024-07-24 22:01:29.929760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ec408 00:21:24.289 [2024-07-24 22:01:29.931521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.289 [2024-07-24 22:01:29.931552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:24.289 [2024-07-24 22:01:29.944935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ebb98 00:21:24.289 [2024-07-24 22:01:29.946732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.289 [2024-07-24 22:01:29.946767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:24.289 [2024-07-24 22:01:29.959974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eb328 00:21:24.289 [2024-07-24 22:01:29.961794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.289 [2024-07-24 22:01:29.961827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:24.289 [2024-07-24 22:01:29.974639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eaab8 00:21:24.289 [2024-07-24 22:01:29.976360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.289 [2024-07-24 22:01:29.976392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:24.289 [2024-07-24 22:01:29.989106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ea248 00:21:24.289 [2024-07-24 22:01:29.990848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.289 [2024-07-24 22:01:29.990882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:24.289 [2024-07-24 22:01:30.003946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) 
with pdu=0x2000190e99d8 00:21:24.547 [2024-07-24 22:01:30.005781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.005814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.018847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e9168 00:21:24.548 [2024-07-24 22:01:30.020449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.020480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.032975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e88f8 00:21:24.548 [2024-07-24 22:01:30.034567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.034599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.047675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e8088 00:21:24.548 [2024-07-24 22:01:30.049439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.049470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.062947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e7818 00:21:24.548 [2024-07-24 22:01:30.064901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.064940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.078570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e6fa8 00:21:24.548 [2024-07-24 22:01:30.080179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.080211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.092935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e6738 00:21:24.548 [2024-07-24 22:01:30.094474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.094506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.107560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15a8360) with pdu=0x2000190e5ec8 00:21:24.548 [2024-07-24 22:01:30.109347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.109378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.122169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e5658 00:21:24.548 [2024-07-24 22:01:30.123822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.124028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.137464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e4de8 00:21:24.548 [2024-07-24 22:01:30.139259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.139429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.154494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e4578 00:21:24.548 [2024-07-24 22:01:30.156197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.156380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.171346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e3d08 00:21:24.548 [2024-07-24 22:01:30.173116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.173287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.187246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e3498 00:21:24.548 [2024-07-24 22:01:30.188898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.188930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.204195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e2c28 00:21:24.548 [2024-07-24 22:01:30.205745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.205781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.221203] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e23b8 00:21:24.548 [2024-07-24 22:01:30.222721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.222757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.237286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e1b48 00:21:24.548 [2024-07-24 22:01:30.238805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.238839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:24.548 [2024-07-24 22:01:30.252747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e12d8 00:21:24.548 [2024-07-24 22:01:30.254195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.548 [2024-07-24 22:01:30.254228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.268749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e0a68 00:21:24.808 [2024-07-24 22:01:30.270216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.270249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.284383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e01f8 00:21:24.808 [2024-07-24 22:01:30.285818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.285852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.300517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190df988 00:21:24.808 [2024-07-24 22:01:30.301921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.301954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.316254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190df118 00:21:24.808 [2024-07-24 22:01:30.317624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.317683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.331480] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190de8a8 00:21:24.808 [2024-07-24 22:01:30.332901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.332937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.346746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190de038 00:21:24.808 [2024-07-24 22:01:30.348038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.348088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.367884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190de038 00:21:24.808 [2024-07-24 22:01:30.370352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.370384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.382855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190de8a8 00:21:24.808 [2024-07-24 22:01:30.385334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.385367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.397818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190df118 00:21:24.808 [2024-07-24 22:01:30.400380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.400415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.413438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190df988 00:21:24.808 [2024-07-24 22:01:30.415989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.416036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.428884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e01f8 00:21:24.808 [2024-07-24 22:01:30.431248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.431280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:24.808 
[2024-07-24 22:01:30.443655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e0a68 00:21:24.808 [2024-07-24 22:01:30.446070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.446102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.459082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e12d8 00:21:24.808 [2024-07-24 22:01:30.461492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.461524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.474333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e1b48 00:21:24.808 [2024-07-24 22:01:30.476854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.476896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.489306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e23b8 00:21:24.808 [2024-07-24 22:01:30.491524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.491556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.503821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e2c28 00:21:24.808 [2024-07-24 22:01:30.506108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.808 [2024-07-24 22:01:30.506140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:24.808 [2024-07-24 22:01:30.518638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e3498 00:21:24.809 [2024-07-24 22:01:30.520984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.809 [2024-07-24 22:01:30.521019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:25.067 [2024-07-24 22:01:30.535559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e3d08 00:21:25.067 [2024-07-24 22:01:30.538193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.067 [2024-07-24 22:01:30.538228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a 
p:0 m:0 dnr:0 00:21:25.067 [2024-07-24 22:01:30.551772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e4578 00:21:25.067 [2024-07-24 22:01:30.554123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.067 [2024-07-24 22:01:30.554157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.566777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e4de8 00:21:25.068 [2024-07-24 22:01:30.568970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.569005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.581261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e5658 00:21:25.068 [2024-07-24 22:01:30.583384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.583416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.595831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e5ec8 00:21:25.068 [2024-07-24 22:01:30.598011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.598043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.610826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e6738 00:21:25.068 [2024-07-24 22:01:30.612990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.613024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.625188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e6fa8 00:21:25.068 [2024-07-24 22:01:30.627245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.627276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.639513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e7818 00:21:25.068 [2024-07-24 22:01:30.641644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.641684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 
cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.654722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e8088 00:21:25.068 [2024-07-24 22:01:30.657086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.657122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.671561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e88f8 00:21:25.068 [2024-07-24 22:01:30.673818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.673853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.687225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e9168 00:21:25.068 [2024-07-24 22:01:30.689368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.689402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.702289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190e99d8 00:21:25.068 [2024-07-24 22:01:30.704325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.704356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.717989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ea248 00:21:25.068 [2024-07-24 22:01:30.720205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.720237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.733344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eaab8 00:21:25.068 [2024-07-24 22:01:30.735583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.735639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.748699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eb328 00:21:25.068 [2024-07-24 22:01:30.750693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.750731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.763519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ebb98 00:21:25.068 [2024-07-24 22:01:30.765585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.765639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:25.068 [2024-07-24 22:01:30.778548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ec408 00:21:25.068 [2024-07-24 22:01:30.780789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.068 [2024-07-24 22:01:30.780832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:25.339 [2024-07-24 22:01:30.794836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ecc78 00:21:25.339 [2024-07-24 22:01:30.796931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.339 [2024-07-24 22:01:30.796966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:25.339 [2024-07-24 22:01:30.810411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ed4e8 00:21:25.339 [2024-07-24 22:01:30.812392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.812438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.825851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190edd58 00:21:25.340 [2024-07-24 22:01:30.827731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.827763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.841122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ee5c8 00:21:25.340 [2024-07-24 22:01:30.843148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.843180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.856109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eee38 00:21:25.340 [2024-07-24 22:01:30.857962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.858009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.871172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190ef6a8 00:21:25.340 [2024-07-24 22:01:30.873056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.873090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.886399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190eff18 00:21:25.340 [2024-07-24 22:01:30.888306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.888337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.900869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f0788 00:21:25.340 [2024-07-24 22:01:30.902614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.902670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.915386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f0ff8 00:21:25.340 [2024-07-24 22:01:30.917339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.917387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.930640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f1868 00:21:25.340 [2024-07-24 22:01:30.932354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.932385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.944963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f20d8 00:21:25.340 [2024-07-24 22:01:30.946870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.946902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.959394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f2948 00:21:25.340 [2024-07-24 22:01:30.961244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.961297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.974595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f31b8 00:21:25.340 [2024-07-24 22:01:30.976292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.976323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:30.988917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f3a28 00:21:25.340 [2024-07-24 22:01:30.990745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:30.990776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:31.007168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f4298 00:21:25.340 [2024-07-24 22:01:31.009351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:31.009386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:31.023053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f4b08 00:21:25.340 [2024-07-24 22:01:31.024789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:31.024830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:25.340 [2024-07-24 22:01:31.038933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f5378 00:21:25.340 [2024-07-24 22:01:31.040764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.340 [2024-07-24 22:01:31.040798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.056782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f5be8 00:21:25.611 [2024-07-24 22:01:31.058367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.058400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.074628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f6458 00:21:25.611 [2024-07-24 22:01:31.076284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.076318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.089792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f6cc8 00:21:25.611 [2024-07-24 22:01:31.091408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.091456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.104679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f7538 00:21:25.611 [2024-07-24 22:01:31.106237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.106271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.119551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f7da8 00:21:25.611 [2024-07-24 22:01:31.121217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.121252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.134584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f8618 00:21:25.611 [2024-07-24 22:01:31.136122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.136154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.149370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f8e88 00:21:25.611 [2024-07-24 22:01:31.150979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.151011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.164372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f96f8 00:21:25.611 [2024-07-24 22:01:31.165958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.166019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.180554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190f9f68 00:21:25.611 [2024-07-24 22:01:31.182151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 
22:01:31.182182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.195064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fa7d8 00:21:25.611 [2024-07-24 22:01:31.196509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.196541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.209800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fb048 00:21:25.611 [2024-07-24 22:01:31.211216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.211247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.225179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fb8b8 00:21:25.611 [2024-07-24 22:01:31.226759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.226794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.241923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fc128 00:21:25.611 [2024-07-24 22:01:31.243363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.243394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.258318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fc998 00:21:25.611 [2024-07-24 22:01:31.259882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.259916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.274835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fd208 00:21:25.611 [2024-07-24 22:01:31.276258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.276292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.289944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fda78 00:21:25.611 [2024-07-24 22:01:31.291224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20870 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:25.611 [2024-07-24 22:01:31.291256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.304650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fe2e8 00:21:25.611 [2024-07-24 22:01:31.306022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.306053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:25.611 [2024-07-24 22:01:31.319313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190feb58 00:21:25.611 [2024-07-24 22:01:31.320624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.611 [2024-07-24 22:01:31.320682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:25.870 [2024-07-24 22:01:31.340801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:25.870 [2024-07-24 22:01:31.343271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.870 [2024-07-24 22:01:31.343303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.870 [2024-07-24 22:01:31.356157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190feb58 00:21:25.870 [2024-07-24 22:01:31.358916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.870 [2024-07-24 22:01:31.359098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:25.870 00:21:25.870 Latency(us) 00:21:25.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.870 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:25.870 nvme0n1 : 2.00 16554.55 64.67 0.00 0.00 7725.49 2353.34 29074.15 00:21:25.870 =================================================================================================================== 00:21:25.870 Total : 16554.55 64.67 0.00 0.00 7725.49 2353.34 29074.15 00:21:25.870 0 00:21:25.870 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:25.870 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:25.870 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:25.870 | .driver_specific 00:21:25.870 | .nvme_error 00:21:25.870 | .status_code 00:21:25.870 | .command_transient_transport_error' 00:21:25.870 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 129 > 0 )) 00:21:26.128 22:01:31 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95277 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95277 ']' 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95277 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95277 00:21:26.128 killing process with pid 95277 00:21:26.128 Received shutdown signal, test time was about 2.000000 seconds 00:21:26.128 00:21:26.128 Latency(us) 00:21:26.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.128 =================================================================================================================== 00:21:26.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95277' 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95277 00:21:26.128 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95277 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95339 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95339 /var/tmp/bperf.sock 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95339 ']' 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:26.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
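For reference, the pass/fail decision applied above (host/digest.sh@71: (( 129 > 0 ))) boils down to reading the per-controller NVMe error counters that --nvme-error-stat keeps. A minimal sketch, assuming the same rpc.py path and /var/tmp/bperf.sock socket as this run; the dotted jq path is just the compact form of the filter shown in the trace:

  # Count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded by the initiator
  errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
         | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test only passes if at least one such error was counted
  (( errs > 0 ))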
00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:26.385 22:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:26.385 [2024-07-24 22:01:31.961879] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:26.385 [2024-07-24 22:01:31.962169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95339 ] 00:21:26.385 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:26.385 Zero copy mechanism will not be used. 00:21:26.385 [2024-07-24 22:01:32.100159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.642 [2024-07-24 22:01:32.174361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.642 [2024-07-24 22:01:32.229167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:27.209 22:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:27.209 22:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:27.209 22:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:27.209 22:01:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:27.467 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:27.467 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.467 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.467 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.467 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.467 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.725 nvme0n1 00:21:27.725 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:27.725 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.725 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.725 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.725 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:27.725 22:01:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:27.982 I/O size of 131072 is greater than zero copy threshold (65536). 
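The xtrace above is the setup for the 128 KiB, queue-depth-16 randwrite pass of the error test. Pulled out of the trace and put in order, the sequence is roughly the following sketch; the binary and script paths are the ones from this run, while the target-side RPC socket is an assumption (the trace only shows rpc_cmd, which normally talks to the nvmf target's default socket):

  SPDK=/home/vagrant/spdk_repo/spdk

  # Start bdevperf on core 1 (mask 0x2) with its own RPC socket, waiting (-z) to be configured
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Initiator side: keep NVMe error statistics and never give up at the bdev layer
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side (default RPC socket assumed): make sure no crc32c error injection is active while connecting
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # Attach the subsystem over TCP with data digest (--ddgst) enabled; this exposes bdev nvme0n1
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side again: corrupt every 32nd crc32c calculation so the initiator sees data digest
  # errors on the wire, which surface as transient transport errors in the completions below
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick off the 2-second I/O run inside the already-running bdevperf process
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests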
00:21:27.982 Zero copy mechanism will not be used. 00:21:27.982 Running I/O for 2 seconds... 00:21:27.982 [2024-07-24 22:01:33.527774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.528094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.528123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.532974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.533322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.533355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.538217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.538504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.538531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.543418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.543723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.543749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.548445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.548757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.548841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.553734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.554019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.554094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.558785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.559105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.559131] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.563844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.564169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.564212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.568951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.569278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.569337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.574189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.574497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.574523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.579711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.982 [2024-07-24 22:01:33.580040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.982 [2024-07-24 22:01:33.580069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.982 [2024-07-24 22:01:33.585401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.585729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.585756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.591037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.591359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.591389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.596517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.596880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:27.983 [2024-07-24 22:01:33.596913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.602092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.602393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.602424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.607834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.608144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.608174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.613204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.613532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.613558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.618494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.618836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.618867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.623762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.624047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.624073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.628790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.629120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.629159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.633878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.634205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.634232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.639080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.639396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.639426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.644131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.644434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.644481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.649330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.649649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.649712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.654743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.655028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.655085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.659810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.660110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.660170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.664759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.665067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.665109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.669760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.670045] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.670102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.674860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.675160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.675221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.679868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.680155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.680181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.685093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.685426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.685452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.690059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.690399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.690437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:27.983 [2024-07-24 22:01:33.695184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:27.983 [2024-07-24 22:01:33.695494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.983 [2024-07-24 22:01:33.695548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.700904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.701229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.701287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.706179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.706491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.706533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.711314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.711613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.711682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.716411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.716707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.716733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.721444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.721759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.721785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.726598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.726955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.727003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.732035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.732329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.732357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.737502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.737849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.737881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.742927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 
00:21:28.242 [2024-07-24 22:01:33.743263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.743290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.748530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.748871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.748894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.753913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.754218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.754245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.759087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.759375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.759416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.764344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.764704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.764750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.769580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.769913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.769958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.774751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.775028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.775086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.779742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.779999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.780023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.784794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.785149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.242 [2024-07-24 22:01:33.785177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.242 [2024-07-24 22:01:33.789830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.242 [2024-07-24 22:01:33.790104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.790160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.794891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.795195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.795251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.800352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.800709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.800738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.805868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.806186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.806215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.811387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.811749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.811776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.816858] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.817154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.817189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.822299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.822621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.822688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.827787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.828110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.828137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.833017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.833310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.833337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.838347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.838689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.843819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.844122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.844149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.849824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.850141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.850169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:28.243 [2024-07-24 22:01:33.855214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.855517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.855543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.860750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.861123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.861150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.866370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.866679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.866719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.871690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.871988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.872014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.876668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.877000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.877037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.881756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.882041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.882082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.886792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.887057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.887097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.891786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.892078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.892130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.896788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.897108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.897135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.901794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.902080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.902106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.907226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.907551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.907602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.912672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.913020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.913059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.918034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.918359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.918404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.923265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.923536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.923562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.928505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.243 [2024-07-24 22:01:33.928869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.243 [2024-07-24 22:01:33.928902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.243 [2024-07-24 22:01:33.933702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.244 [2024-07-24 22:01:33.933984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.244 [2024-07-24 22:01:33.934009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.244 [2024-07-24 22:01:33.938698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.244 [2024-07-24 22:01:33.938982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.244 [2024-07-24 22:01:33.939008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.244 [2024-07-24 22:01:33.943880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.244 [2024-07-24 22:01:33.944191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.244 [2024-07-24 22:01:33.944217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.244 [2024-07-24 22:01:33.949511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.244 [2024-07-24 22:01:33.949871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.244 [2024-07-24 22:01:33.949903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.244 [2024-07-24 22:01:33.954964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.244 [2024-07-24 22:01:33.955295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.244 [2024-07-24 22:01:33.955322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.960636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.960995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:33.961027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.966264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.966596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:33.966630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.971388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.971718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:33.971765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.976534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.976858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:33.976885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.981694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.981975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:33.982001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.986737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.987027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:33.987053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.991655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.991937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:33.991962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:33.997044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:33.997341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 
[2024-07-24 22:01:33.997368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.002349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.002680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.002718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.007596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.007878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.007905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.012902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.013224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.013262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.018279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.018635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.018682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.023441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.023705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.023761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.028320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.028591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.028673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.033229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.033509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.033534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.038170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.038426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.038466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.043080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.043352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.043376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.048208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.048547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.048571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.053743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.054028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.054085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.058963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.059258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.059283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.064152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.064412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.064469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.069354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.069662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.069695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.074410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.074713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.074738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.079467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.504 [2024-07-24 22:01:34.079805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.504 [2024-07-24 22:01:34.079837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.504 [2024-07-24 22:01:34.084558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.084883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.084909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.089532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.089862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.089893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.094793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.095065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.095122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.099794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.100102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.100127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.105650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.105997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.106068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.111327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.111624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.111659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.116884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.117191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.117230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.122624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.123031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.123067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.127669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.127926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.127982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.132450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.132738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.132819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.137462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.137738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.137794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.142352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 
[2024-07-24 22:01:34.142622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.142677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.147273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.147598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.147647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.152250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.152578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.152622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.157750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.158049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.158110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.163082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.163397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.163440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.168325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.168643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.168698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.173607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.173956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.174010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.178756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.179041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.179066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.183798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.184092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.184118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.188851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.189148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.189175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.193849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.194143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.194168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.198855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.199138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.199164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.203861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.204156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.204182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.208875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.209157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.209183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.213882] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.505 [2024-07-24 22:01:34.214158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.505 [2024-07-24 22:01:34.214183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.505 [2024-07-24 22:01:34.219420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.219796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.219823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.224731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.225097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.225124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.229845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.230137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.230162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.234792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.235068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.235092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.239749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.240063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.240110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.244869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.245169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.245194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:28.766 [2024-07-24 22:01:34.249887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.250172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.250212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.254833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.255108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.255134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.259640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.259919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.259943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.264496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.264848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.264880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.269490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.269808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.269833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.274470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.274778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.274802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.279633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.279951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.279993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.284977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.285293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.285320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.290424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.290766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.290798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.295916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.296242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.296268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.301373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.301717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.301763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.306703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.307022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.307060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.312093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.312391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.312417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.317532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.317880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.317911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.322941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.323269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.323294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.328263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.328540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.328565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.333314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.333606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.333640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.338290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.338569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.338594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.343569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.766 [2024-07-24 22:01:34.343931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.766 [2024-07-24 22:01:34.343969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.766 [2024-07-24 22:01:34.348972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.349294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.349320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.354201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.354484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.354509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.359776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.360086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.360130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.365906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.366204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.366236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.371346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.371686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.371724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.376930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.377230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.377258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.382543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.382907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.382938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.387716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.388000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.388031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.392622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.392941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 
[2024-07-24 22:01:34.392968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.397675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.398015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.398042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.402801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.403093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.403120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.407884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.408177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.408204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.413066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.413382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.413408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.418235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.418511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.418536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.423404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.423716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.423742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.428538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.428874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.428904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.433539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.433876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.433908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.439018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.439333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.439361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.444577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.444960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.444992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.449926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.450256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.450284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.455306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.455630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.455672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.460402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.460726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.460753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.465458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.465757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.465783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.470494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.470794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.470819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.475599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.475906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.475932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.767 [2024-07-24 22:01:34.481254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:28.767 [2024-07-24 22:01:34.481601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.767 [2024-07-24 22:01:34.481636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.486871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.487191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.487219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.492229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.492551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.492577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.497623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.497940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.497967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.502800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.503083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.503109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.507780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.508064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.508089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.512843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.513151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.513205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.518026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.518313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.518339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.522997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.523307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.523333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.027 [2024-07-24 22:01:34.528037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.027 [2024-07-24 22:01:34.528350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.027 [2024-07-24 22:01:34.528372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.533283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.533582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.533622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.538670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 
[2024-07-24 22:01:34.539020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.539055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.543991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.544308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.544335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.549259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.549567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.549593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.554823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.555124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.555156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.559750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.560038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.560058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.564621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.564930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.564956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.569685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.569971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.569996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.574848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.575149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.575175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.580113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.580456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.580482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.585599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.585874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.585900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.590567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.590877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.590902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.595617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.595934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.595996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.601083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.601463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.601500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.606493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.606838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.606875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.612060] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.612377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.612404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.617496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.617815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.617844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.623159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.623459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.623503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.628775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.629103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.629131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.634215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.634646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.634710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.640161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.640512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.640548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.645509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.645829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.645858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
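[editor's note] Every completion printed above carries the same status, shown by spdk_nvme_print_completion as "(00/22)": status code type 0x0 (generic command status) and status code 0x22 (command transient transport error), with dnr:0 meaning the command may be retried. The short standalone sketch below unpacks those fields from completion dwords 2 and 3 per the NVMe completion queue entry layout; it is illustrative only, not SPDK source, and all names in it are made up for this example.

/*
 * Illustrative decode of the completion status logged above
 * ("(00/22) ... sqhd:0041 p:0 m:0 dnr:0").  Standalone sketch,
 * not SPDK code; offsets follow the NVMe CQE layout (dwords 2-3).
 */
#include <stdint.h>
#include <stdio.h>

struct cpl_status {
    uint8_t  sct;   /* status code type: 0x0 = generic command status            */
    uint8_t  sc;    /* status code: 0x22 = command transient transport error     */
    uint8_t  dnr;   /* do not retry: 0 means the host may retry the I/O          */
    uint8_t  more;  /* more status information available                         */
    uint8_t  phase; /* phase tag                                                 */
    uint16_t sqhd;  /* submission queue head pointer                             */
};

static struct cpl_status decode_cpl(uint32_t dw2, uint32_t dw3)
{
    struct cpl_status s;

    s.sqhd  = (uint16_t)(dw2 & 0xFFFF);      /* DW2[15:0]  */
    s.phase = (uint8_t)((dw3 >> 16) & 0x1);  /* DW3[16]    */
    s.sc    = (uint8_t)((dw3 >> 17) & 0xFF); /* DW3[24:17] */
    s.sct   = (uint8_t)((dw3 >> 25) & 0x7);  /* DW3[27:25] */
    s.more  = (uint8_t)((dw3 >> 30) & 0x1);  /* DW3[30]    */
    s.dnr   = (uint8_t)((dw3 >> 31) & 0x1);  /* DW3[31]    */
    return s;
}

int main(void)
{
    /* dword values chosen to reproduce the logged status: 00/22, sqhd:0041, p:0 m:0 dnr:0 */
    struct cpl_status s = decode_cpl(0x00010041u, 0x22u << 17);

    printf("sct=%02x sc=%02x sqhd=%04x p=%u m=%u dnr=%u -> %s\n",
           s.sct, s.sc, s.sqhd, s.phase, s.more, s.dnr,
           (s.sct == 0x0 && s.sc == 0x22) ? "transient transport error (retryable)" : "other");
    return 0;
}

Because dnr is clear and the error is flagged as transient, the initiator keeps retrying the WRITEs, which is why the same cid reappears throughout the log.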
00:21:29.028 [2024-07-24 22:01:34.650832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.651125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.651150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.655968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.656288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.656314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.660972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.028 [2024-07-24 22:01:34.661331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.028 [2024-07-24 22:01:34.661372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.028 [2024-07-24 22:01:34.665971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.666246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.666271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.671388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.671699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.671735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.676731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.677080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.677106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.681621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.681934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.681960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.686637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.686921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.686946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.691837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.692138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.692164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.697081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.697393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.697433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.702256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.702568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.702593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.707484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.707770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.707795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.712547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.712906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.712937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.717639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.718015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.718046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.722695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.722973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.722997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.727708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.727993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.728018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.732629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.732941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.732968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.029 [2024-07-24 22:01:34.737662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.029 [2024-07-24 22:01:34.738035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.029 [2024-07-24 22:01:34.738067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.289 [2024-07-24 22:01:34.743641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.289 [2024-07-24 22:01:34.743991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.289 [2024-07-24 22:01:34.744018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.289 [2024-07-24 22:01:34.749111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.289 [2024-07-24 22:01:34.749452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.289 [2024-07-24 22:01:34.749479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.289 [2024-07-24 22:01:34.754395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.289 [2024-07-24 22:01:34.754689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.289 [2024-07-24 22:01:34.754716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.289 [2024-07-24 22:01:34.759355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.289 [2024-07-24 22:01:34.759642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.289 [2024-07-24 22:01:34.759667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.289 [2024-07-24 22:01:34.764334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.289 [2024-07-24 22:01:34.764627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.289 [2024-07-24 22:01:34.764661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.289 [2024-07-24 22:01:34.769310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.289 [2024-07-24 22:01:34.769587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.289 [2024-07-24 22:01:34.769623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.774305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.774583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.774619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.779278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.779562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.779587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.784293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.784592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.784625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.789741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.790005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 
[2024-07-24 22:01:34.790030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.794841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.795140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.795166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.799964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.800267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.800293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.805064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.805379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.805405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.810146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.810430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.810456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.815215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.815490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.815515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.820101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.820383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.820423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.825037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.825347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.825373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.829975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.830251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.830276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.834890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.835169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.835195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.840211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.840531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.840557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.845625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.845970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.846003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.850802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.851080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.851106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.855929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.856218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.856244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.860920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.861227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.861253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.865967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.866229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.866254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.870960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.871254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.871296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.876056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.876320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.876345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.881344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.881626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.881692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.886780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.887052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.887094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.891901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.892229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.892272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.897555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.897871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.897912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.902603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.290 [2024-07-24 22:01:34.902873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.290 [2024-07-24 22:01:34.902897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.290 [2024-07-24 22:01:34.907502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.907805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.907834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.912421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.912689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.912713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.917315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.917572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.917597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.922192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.922455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.922479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.927037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.927313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.927338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.931941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 
[2024-07-24 22:01:34.932197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.932222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.936793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.937107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.937148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.941709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.941964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.941989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.946525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.946824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.946854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.951614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.951880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.951904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.956413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.956678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.956702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.961352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.961609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.961643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.966257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.966514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.966539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.971255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.971538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.971579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.976169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.976443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.976467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.981222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.981496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.981521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.986065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.986321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.986346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.990825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.991098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.991122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:34.995718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:34.995972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:34.995997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.291 [2024-07-24 22:01:35.000514] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.291 [2024-07-24 22:01:35.000880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.291 [2024-07-24 22:01:35.000911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.552 [2024-07-24 22:01:35.006158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.552 [2024-07-24 22:01:35.006460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.552 [2024-07-24 22:01:35.006515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.552 [2024-07-24 22:01:35.011391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.552 [2024-07-24 22:01:35.011747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.552 [2024-07-24 22:01:35.011803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.552 [2024-07-24 22:01:35.016468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.552 [2024-07-24 22:01:35.016840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.552 [2024-07-24 22:01:35.016870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.552 [2024-07-24 22:01:35.021845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.552 [2024-07-24 22:01:35.022166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.552 [2024-07-24 22:01:35.022209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.552 [2024-07-24 22:01:35.027162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.552 [2024-07-24 22:01:35.027479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.552 [2024-07-24 22:01:35.027537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.552 [2024-07-24 22:01:35.032467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.552 [2024-07-24 22:01:35.032771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.552 [2024-07-24 22:01:35.032799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
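[editor's note] The repeated *ERROR* lines from data_crc32_calc_done come from the data digest check on NVMe/TCP data PDUs: the receiving side recomputes a CRC-32C over the PDU payload and compares it with the DDGST carried on the wire, and a mismatch is reported here and surfaces as the transient transport error completions. The sketch below is a minimal, self-contained illustration of that kind of check (bitwise CRC-32C, Castagnoli polynomial); it is not the SPDK code path, which uses its own CRC helpers and may offload the calculation.

/*
 * Minimal sketch of a data digest check: compute CRC-32C over a payload
 * and compare it with the digest received on the wire.  Illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            /* reflected Castagnoli polynomial 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns true when the recomputed digest matches the one carried in the PDU. */
static bool data_digest_ok(const uint8_t *payload, size_t len, uint32_t wire_ddgst)
{
    return crc32c(payload, len) == wire_ddgst;
}

int main(void)
{
    uint8_t payload[32];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t good = crc32c(payload, sizeof(payload));
    uint32_t bad  = good ^ 0x1;  /* simulate a mismatching digest */

    printf("digest ok: %d, digest error: %d\n",
           data_digest_ok(payload, sizeof(payload), good),
           !data_digest_ok(payload, sizeof(payload), bad));
    return 0;
}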
00:21:29.553 [2024-07-24 22:01:35.037793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.038089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.038117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.043270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.043586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.043621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.048605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.048956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.048987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.053980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.054305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.054334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.059316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.059626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.059692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.064558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.064909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.064935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.069696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.070017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.070046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.074797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.075058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.075082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.079712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.079977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.080001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.084882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.085184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.085242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.089973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.090291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.090334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.095067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.095398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.095437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.100162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.100475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.100510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.105270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.105583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.105633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.110281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.110610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.110648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.115258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.115570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.115599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.120484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.120786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.120853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.125670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.126038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.126065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.130705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.131015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.131058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.135765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.136051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.136093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.141127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.141479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.141516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.146835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.147157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.147194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.152378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.152748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.152784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.158271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.158613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.553 [2024-07-24 22:01:35.158666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.553 [2024-07-24 22:01:35.163519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.553 [2024-07-24 22:01:35.163858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.163889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.168605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.168936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.168964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.173701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.173973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.174028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.178997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.179310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 
[2024-07-24 22:01:35.179352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.184161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.184444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.184470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.189379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.189669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.189734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.194548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.194900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.194941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.199782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.200070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.200096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.204779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.205107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.205134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.210111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.210409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.210450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.215235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.215540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.215596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.220310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.220593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.220642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.225548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.225905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.225931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.230688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.230951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.230975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.235755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.236030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.236054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.240713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.241016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.241042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.245758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.246037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.246061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.250668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.250947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.250971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.255523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.255852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.255883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.260603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.260933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.260960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.554 [2024-07-24 22:01:35.266015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.554 [2024-07-24 22:01:35.266357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.554 [2024-07-24 22:01:35.266385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.814 [2024-07-24 22:01:35.271430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.271752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.271779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.276781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.277106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.277148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.281743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.282030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.282056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.286735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.287015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.287040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.291669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.291958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.291983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.296676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.296972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.296998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.301676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.302024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.302055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.306924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.307271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.307297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.312428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.312772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.312799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.317768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.318132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.318169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.323364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 
[2024-07-24 22:01:35.323705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.323753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.329035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.329360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.329385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.334413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.334744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.334771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.339740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.340097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.340122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.344909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.345228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.345255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.349843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.350140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.350166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.354972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.355277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.355303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.359948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.360224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.360249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.365053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.365373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.365399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.370185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.370472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.370498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.375135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.375420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.375460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.380172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.380450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.380475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.385244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.385533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.385558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.390202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.815 [2024-07-24 22:01:35.390481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.815 [2024-07-24 22:01:35.390507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.815 [2024-07-24 22:01:35.395212] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.395510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.395536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.400721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.401079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.401121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.406441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.406758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.406788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.412092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.412367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.412392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.417781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.418062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.418087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.422729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.423006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.423031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.427863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.428156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.428182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
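The repeated records above and below come from the nvmf_digest_error test: every WRITE in this pass fails data digest validation on the target (tcp.c:2058 data_crc32_calc_done reports a data digest error), and the host-side completion is therefore logged as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness later checks that the per-bdev transient-error counter is non-zero. A condensed, equivalent form of the query it issues against the bdevperf RPC socket is shown here for reference; the socket path, bdev name and jq filter are taken verbatim from the trace further down:

# count transient transport errors recorded for the nvme0n1 bdev
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
# the test treats any value greater than 0 as success for this error-injection pass
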
00:21:29.816 [2024-07-24 22:01:35.433065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.433380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.433407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.438337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.438653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.438688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.443544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.443838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.443864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.448623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.448983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.449009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.453695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.453980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.454005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.458743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.459029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.459054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.463714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.463997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.464022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.468630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.468962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.468990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.473652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.473986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.474033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.478941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.479282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.479308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.484420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.484728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.484774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.489376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.489687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.489723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.494318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.494598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.494631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.499171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.499450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.499476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.504066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.504342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.504367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.509120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.509455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.509481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.816 [2024-07-24 22:01:35.514126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15a8360) with pdu=0x2000190fef90 00:21:29.816 [2024-07-24 22:01:35.514404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.816 [2024-07-24 22:01:35.514429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.816 00:21:29.816 Latency(us) 00:21:29.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.816 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:29.816 nvme0n1 : 2.00 5943.70 742.96 0.00 0.00 2686.24 2159.71 9770.82 00:21:29.816 =================================================================================================================== 00:21:29.816 Total : 5943.70 742.96 0.00 0.00 2686.24 2159.71 9770.82 00:21:29.816 0 00:21:30.075 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:30.075 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:30.075 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:30.075 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:30.075 | .driver_specific 00:21:30.075 | .nvme_error 00:21:30.075 | .status_code 00:21:30.075 | .command_transient_transport_error' 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 )) 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95339 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95339 ']' 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95339 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- 
# ps --no-headers -o comm= 95339 00:21:30.334 killing process with pid 95339 00:21:30.334 Received shutdown signal, test time was about 2.000000 seconds 00:21:30.334 00:21:30.334 Latency(us) 00:21:30.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.334 =================================================================================================================== 00:21:30.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95339' 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95339 00:21:30.334 22:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95339 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95136 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95136 ']' 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95136 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95136 00:21:30.649 killing process with pid 95136 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95136' 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95136 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95136 00:21:30.649 00:21:30.649 real 0m17.444s 00:21:30.649 user 0m33.373s 00:21:30.649 sys 0m4.678s 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:30.649 ************************************ 00:21:30.649 END TEST nvmf_digest_error 00:21:30.649 ************************************ 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.649 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.908 
rmmod nvme_tcp 00:21:30.908 rmmod nvme_fabrics 00:21:30.908 rmmod nvme_keyring 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 95136 ']' 00:21:30.908 Process with pid 95136 is not found 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 95136 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 95136 ']' 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 95136 00:21:30.908 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (95136) - No such process 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 95136 is not found' 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.908 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.909 22:01:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.909 22:01:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.909 22:01:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:30.909 00:21:30.909 real 0m36.801s 00:21:30.909 user 1m9.440s 00:21:30.909 sys 0m9.835s 00:21:30.909 22:01:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:30.909 ************************************ 00:21:30.909 22:01:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:30.909 END TEST nvmf_digest 00:21:30.909 ************************************ 00:21:30.909 22:01:36 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:21:30.909 22:01:36 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:21:30.909 22:01:36 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:30.909 22:01:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:30.909 22:01:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:30.909 22:01:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.909 ************************************ 00:21:30.909 START TEST nvmf_host_multipath 00:21:30.909 ************************************ 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:30.909 * Looking for test storage... 
00:21:30.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:30.909 22:01:36 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.909 22:01:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:31.168 Cannot find device "nvmf_tgt_br" 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.168 Cannot find device "nvmf_tgt_br2" 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:21:31.168 Cannot find device "nvmf_tgt_br" 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:31.168 Cannot find device "nvmf_tgt_br2" 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.168 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.169 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.427 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:31.427 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:31.427 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:21:31.427 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.427 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.427 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:31.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:21:31.428 00:21:31.428 --- 10.0.0.2 ping statistics --- 00:21:31.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.428 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:31.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:31.428 00:21:31.428 --- 10.0.0.3 ping statistics --- 00:21:31.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.428 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:31.428 00:21:31.428 --- 10.0.0.1 ping statistics --- 00:21:31.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.428 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:31.428 22:01:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:31.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
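Before the target starts, nvmf_veth_init (traced above) builds the virtual topology shared by the TCP tests in this run: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, and a single ping per address confirms reachability. A trimmed reconstruction of those commands (the second target interface is configured the same way as the first):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
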
00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=95607 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 95607 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 95607 ']' 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:31.428 22:01:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:31.428 [2024-07-24 22:01:37.084019] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:31.428 [2024-07-24 22:01:37.084145] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.687 [2024-07-24 22:01:37.232304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:31.687 [2024-07-24 22:01:37.320091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.687 [2024-07-24 22:01:37.320448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.687 [2024-07-24 22:01:37.320639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.687 [2024-07-24 22:01:37.320802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.687 [2024-07-24 22:01:37.320872] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
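nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the application's RPC socket answers before any configuration is pushed. The real helper lives in autotest_common.sh; the loop below is only an illustrative sketch of that wait, polling a cheap RPC until it succeeds:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# illustrative poll, not the autotest_common.sh implementation:
# keep asking the RPC server at /var/tmp/spdk.sock for its method list until it responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
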
00:21:31.687 [2024-07-24 22:01:37.321142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.687 [2024-07-24 22:01:37.321151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.687 [2024-07-24 22:01:37.380153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95607 00:21:32.622 22:01:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:32.622 [2024-07-24 22:01:38.332721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.881 22:01:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:33.140 Malloc0 00:21:33.140 22:01:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:33.398 22:01:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.657 22:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.657 [2024-07-24 22:01:39.309281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.657 22:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:33.916 [2024-07-24 22:01:39.573372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95666 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:33.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
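Taken together, the rpc.py calls above build the multipath topology the rest of the test flips ANA states on: one TCP transport, one malloc-backed namespace, and a single subsystem exported on two listeners (ports 4420 and 4421) at the same address. A condensed sketch, using RPC only as shorthand for the rpc.py path shown in the log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # one TCP transport, 8 KiB in-capsule data (-u 8192), other flags as captured above
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # a 64 MiB, 512-byte-block malloc bdev as the backing namespace
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # ANA-reporting subsystem (-r), open to any host (-a), max 2 namespaces
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same IP: the two paths the test will steer between
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

On the host side, bdevperf is started with -z against /var/tmp/bdevperf.sock and, in the lines that follow, both listeners are attached to the same Nvme0 controller (the second bdev_nvme_attach_controller call adds -x multipath), which is what lets the test move I/O between ports 4420 and 4421 purely by changing ANA states.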
00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95666 /var/tmp/bdevperf.sock 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 95666 ']' 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:33.916 22:01:39 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:34.851 22:01:40 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:34.851 22:01:40 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:21:34.851 22:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:35.110 22:01:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:35.676 Nvme0n1 00:21:35.676 22:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:35.935 Nvme0n1 00:21:35.936 22:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.936 22:01:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:36.874 22:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:36.874 22:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:37.132 22:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:37.390 22:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:37.390 22:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95706 00:21:37.390 22:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95607 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:37.390 22:01:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:43.950 22:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:43.950 22:01:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] 
| select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.950 Attaching 4 probes... 00:21:43.950 @path[10.0.0.2, 4421]: 18365 00:21:43.950 @path[10.0.0.2, 4421]: 18780 00:21:43.950 @path[10.0.0.2, 4421]: 19113 00:21:43.950 @path[10.0.0.2, 4421]: 18884 00:21:43.950 @path[10.0.0.2, 4421]: 18856 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95706 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:43.950 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:44.208 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:44.208 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95607 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:44.208 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95824 00:21:44.208 22:01:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.762 Attaching 4 probes... 
00:21:50.762 @path[10.0.0.2, 4420]: 18363 00:21:50.762 @path[10.0.0.2, 4420]: 18451 00:21:50.762 @path[10.0.0.2, 4420]: 18665 00:21:50.762 @path[10.0.0.2, 4420]: 19031 00:21:50.762 @path[10.0.0.2, 4420]: 19168 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95824 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:50.762 22:01:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:50.762 22:01:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:51.021 22:01:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:51.021 22:01:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95935 00:21:51.021 22:01:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:51.021 22:01:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95607 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:57.708 Attaching 4 probes... 
00:21:57.708 @path[10.0.0.2, 4421]: 15138 00:21:57.708 @path[10.0.0.2, 4421]: 18293 00:21:57.708 @path[10.0.0.2, 4421]: 18496 00:21:57.708 @path[10.0.0.2, 4421]: 18272 00:21:57.708 @path[10.0.0.2, 4421]: 18500 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95935 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:57.708 22:02:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:57.708 22:02:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:57.708 22:02:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:57.708 22:02:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96049 00:21:57.708 22:02:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95607 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:57.708 22:02:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:04.272 Attaching 4 probes... 
00:22:04.272 00:22:04.272 00:22:04.272 00:22:04.272 00:22:04.272 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96049 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:04.272 22:02:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:04.531 22:02:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:04.531 22:02:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96167 00:22:04.531 22:02:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95607 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:04.531 22:02:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.124 Attaching 4 probes... 
00:22:11.124 @path[10.0.0.2, 4421]: 17978 00:22:11.124 @path[10.0.0.2, 4421]: 18312 00:22:11.124 @path[10.0.0.2, 4421]: 18391 00:22:11.124 @path[10.0.0.2, 4421]: 18650 00:22:11.124 @path[10.0.0.2, 4421]: 18507 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96167 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:11.124 [2024-07-24 22:02:16.677844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009c60 is same with the state(5) to be set 00:22:11.124 [2024-07-24 22:02:16.678122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009c60 is same with the state(5) to be set 00:22:11.124 [2024-07-24 22:02:16.678278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009c60 is same with the state(5) to be set 00:22:11.124 22:02:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:12.059 22:02:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:12.059 22:02:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96285 00:22:12.059 22:02:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95607 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:12.059 22:02:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.620 Attaching 4 probes... 
00:22:18.620 @path[10.0.0.2, 4420]: 18000 00:22:18.620 @path[10.0.0.2, 4420]: 18284 00:22:18.620 @path[10.0.0.2, 4420]: 18472 00:22:18.620 @path[10.0.0.2, 4420]: 18312 00:22:18.620 @path[10.0.0.2, 4420]: 18347 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.620 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.621 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96285 00:22:18.621 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.621 22:02:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.621 [2024-07-24 22:02:24.283306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.621 22:02:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:18.879 22:02:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:25.441 22:02:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:25.441 22:02:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96460 00:22:25.441 22:02:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95607 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:25.441 22:02:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.014 Attaching 4 probes... 
00:22:32.014 @path[10.0.0.2, 4421]: 18041 00:22:32.014 @path[10.0.0.2, 4421]: 18256 00:22:32.014 @path[10.0.0.2, 4421]: 18308 00:22:32.014 @path[10.0.0.2, 4421]: 18418 00:22:32.014 @path[10.0.0.2, 4421]: 18542 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96460 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95666 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 95666 ']' 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 95666 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95666 00:22:32.014 killing process with pid 95666 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95666' 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 95666 00:22:32.014 22:02:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 95666 00:22:32.014 Connection closed with partial response: 00:22:32.014 00:22:32.014 00:22:32.014 22:02:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95666 00:22:32.014 22:02:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:32.014 [2024-07-24 22:01:39.644721] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:32.014 [2024-07-24 22:01:39.644974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95666 ] 00:22:32.014 [2024-07-24 22:01:39.787732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.014 [2024-07-24 22:01:39.864187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.014 [2024-07-24 22:01:39.922049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:32.014 Running I/O for 90 seconds... 
00:22:32.014 [2024-07-24 22:01:49.686020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.014 [2024-07-24 22:01:49.686764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.014 [2024-07-24 22:01:49.686798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.014 [2024-07-24 22:01:49.686832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.014 [2024-07-24 22:01:49.686866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.014 [2024-07-24 22:01:49.686900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.014 [2024-07-24 22:01:49.686933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.014 [2024-07-24 22:01:49.686967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.686987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.014 [2024-07-24 22:01:49.687009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:32.014 [2024-07-24 22:01:49.687044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.687058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:32.015 [2024-07-24 22:01:49.687314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.687850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.687884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.687919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.687954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.687982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.687998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.688047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.688082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.688133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.015 [2024-07-24 22:01:49.688167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:32.015 [2024-07-24 22:01:49.688434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:32.015 [2024-07-24 22:01:49.688525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.015 [2024-07-24 22:01:49.688539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.688854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.688890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.688933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.688970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.688991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.016 [2024-07-24 22:01:49.689453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.689984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.689999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.690020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.016 [2024-07-24 22:01:49.690034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:32.016 [2024-07-24 22:01:49.690056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.690070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.690091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.690106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.690126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.690141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.690162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.690177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.690197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.690212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.690233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.690247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.690268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.690288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.690310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.690324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.691877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.017 [2024-07-24 22:01:49.691922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.691951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.691968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.691990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:22:32.017 [2024-07-24 22:01:49.692430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:49.692828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:49.692846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.233862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.233921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.233989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:32.017 [2024-07-24 22:01:56.234310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.017 [2024-07-24 22:01:56.234323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.018 [2024-07-24 22:01:56.234512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.018 [2024-07-24 22:01:56.234558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.018 [2024-07-24 22:01:56.234590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:32.018 [2024-07-24 22:01:56.234650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.018 [2024-07-24 22:01:56.234688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.018 [2024-07-24 22:01:56.234722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.018 [2024-07-24 22:01:56.234755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.018 [2024-07-24 22:01:56.234788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.234960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.234979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:32.018 [2024-07-24 22:01:56.235384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.018 [2024-07-24 22:01:56.235398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:22:32.019 [2024-07-24 22:01:56.235730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.235811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.235975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.235989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.019 [2024-07-24 22:01:56.236470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.019 [2024-07-24 22:01:56.236742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:32.019 [2024-07-24 22:01:56.236783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:32.019 [2024-07-24 22:01:56.236828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.236846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.236872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.236887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.236908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.236922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.236942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.236957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.236978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.236993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.237669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.237913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.237927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 
dnr:0 00:22:32.020 [2024-07-24 22:01:56.238563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.020 [2024-07-24 22:01:56.238587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.238967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.238984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.239012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.020 [2024-07-24 22:01:56.239027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.020 [2024-07-24 22:01:56.239070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:01:56.239480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:01:56.239499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.309808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.309889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.309977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.309997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:32.021 [2024-07-24 22:02:03.310317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.310410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.310956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.310984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.311003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.311017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.311037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.021 [2024-07-24 22:02:03.311066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.311101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.021 [2024-07-24 22:02:03.311116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:32.021 [2024-07-24 22:02:03.311137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
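00:22:32.022 The qpair NOTICE lines in this part of the log all follow one fixed shape: nvme_io_qpair_print_command records the submitted command (READ/WRITE, sqid, cid, nsid, lba, len), and spdk_nvme_print_completion records its completion with a status name and an (sct/sc) code pair, e.g. ASYMMETRIC ACCESS INACCESSIBLE (03/02), plus qid/cid fields that tie it back to the command. Below is a minimal sketch of how this output could be summarized offline, assuming only the line format visible here; the script itself, the tally helper, and the build.log path are illustrative placeholders, not part of the autotest tooling, and finditer is used because a single physical line of this log can carry several wrapped entries.

#!/usr/bin/env python3
# Tally SPDK NVMe completion statuses from an autotest log.
# Assumes the spdk_nvme_print_completion format shown in the entries above;
# "build.log" is an illustrative placeholder path.
import re
import sys
from collections import Counter

# Example target: "... spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
# INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0"
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>.+?) \((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def tally(stream):
    """Count completions grouped by (qid, status name, sct/sc code)."""
    counts = Counter()
    for line in stream:
        # One physical line may hold several wrapped entries, so scan with finditer.
        for m in COMPLETION_RE.finditer(line):
            counts[(m["qid"], m["status"], f"{m['sct']}/{m['sc']}")] += 1
    return counts

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "build.log"
    with open(path) as log:
        for (qid, status, code), n in tally(log).most_common():
            print(f"qid {qid}: {status} ({code}) x{n}")

Run against a saved copy of this console output, the sketch prints one line per (qid, status) combination, which makes it easy to see how many I/Os completed with 03/02 versus other codes during the ANA state transitions exercised by this test.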
00:22:32.022 [2024-07-24 22:02:03.311564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.311578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.311978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.311997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.022 [2024-07-24 22:02:03.312427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.312478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.312528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.312563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.022 [2024-07-24 22:02:03.312612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.022 [2024-07-24 22:02:03.312633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:32.023 [2024-07-24 22:02:03.312647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.312699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.312736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.312771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.312827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.312866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.312903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.312938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.312975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.312996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.023 [2024-07-24 22:02:03.313672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:22:32.023 [2024-07-24 22:02:03.313772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:32.023 [2024-07-24 22:02:03.313843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.023 [2024-07-24 22:02:03.313857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.313878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.313893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.313914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.313929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.313949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.313964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.313984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.313999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.314048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.314096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.314130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.314164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.314202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.314238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.314929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.314964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.314980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.315025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.315069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.315113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.315158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.315203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.315247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:03.315310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.315356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.315410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.315456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.315500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.315545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.315589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:03.315633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:03.315679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:32.024 [2024-07-24 22:02:03.315709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.024 [2024-07-24 22:02:16.678903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:16.678936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:16.678969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.678988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:16.679001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:32.024 [2024-07-24 22:02:16.679021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.024 [2024-07-24 22:02:16.679034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41392 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.025 [2024-07-24 22:02:16.679790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 [2024-07-24 22:02:16.679871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.025 [2024-07-24 22:02:16.679886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.025 
00:22:32.025 [2024-07-24 22:02:16.679898 - 22:02:16.681887] (repeated NOTICE pairs condensed) The stream continues in the same pattern for every request still queued on qid:1: each outstanding WRITE (nsid:1, lba 41448-41632, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (nsid:1, lba 40904-41240, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), with varying cid, is printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0.
22:02:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:32.027 [2024-07-24 22:02:16.681901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019640 is same with the state(5) to be set
00:22:32.027 [2024-07-24 22:02:16.681916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:32.027 [2024-07-24 22:02:16.681926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:32.027 [2024-07-24 22:02:16.681936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41248 len:8 PRP1 0x0 PRP2 0x0
00:22:32.027 [2024-07-24 22:02:16.681953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
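The status printed as (00/08) in all of these completions is NVMe status code type 0x00 (generic command status) with status code 0x08, the code returned for commands caught on a submission queue that is being deleted, which is exactly what the nvmf_delete_subsystem call above triggers on the active path. A tiny standalone helper for reading such sct/sc pairs, written here as a hypothetical bash sketch (not part of the SPDK scripts):
  # decode_nvme_status <sct> <sc> - hypothetical helper, not from the SPDK tree
  decode_nvme_status() {
    local sct=$1 sc=$2
    case "${sct}/${sc}" in
      00/00) echo "GENERIC - SUCCESSFUL COMPLETION" ;;
      00/08) echo "GENERIC - ABORTED - SQ DELETION" ;;
      *)     echo "SCT ${sct} / SC ${sc}: see the NVMe base specification status tables" ;;
    esac
  }
  decode_nvme_status 00 08   # prints: GENERIC - ABORTED - SQ DELETION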
00:22:32.027 [2024-07-24 22:02:16.681966 - 22:02:16.682808] (repeated NOTICE/ERROR records condensed) nvme_qpair_abort_queued_reqs then drains the requests that never reached the wire: for each remaining WRITE (sqid:1 cid:0 nsid:1, lba 41640-41784, len:8, PRP1 0x0 PRP2 0x0) the driver logs "aborting queued i/o" and "Command completed manually:", prints the command, and completes it with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0.
00:22:32.028 [2024-07-24 22:02:16.682836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.028 [2024-07-24 22:02:16.682848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.028 [2024-07-24 22:02:16.682857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.028 [2024-07-24 22:02:16.682866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41792 len:8 PRP1 0x0 PRP2 0x0 00:22:32.028 [2024-07-24 22:02:16.682878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.028 [2024-07-24 22:02:16.682937] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2019640 was disconnected and freed. reset controller. 00:22:32.028 [2024-07-24 22:02:16.684098] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:32.028 [2024-07-24 22:02:16.684175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.028 [2024-07-24 22:02:16.684197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.028 [2024-07-24 22:02:16.684226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201c270 (9): Bad file descriptor 00:22:32.028 [2024-07-24 22:02:16.684681] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.028 [2024-07-24 22:02:16.684712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x201c270 with addr=10.0.0.2, port=4421 00:22:32.028 [2024-07-24 22:02:16.684728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201c270 is same with the state(5) to be set 00:22:32.028 [2024-07-24 22:02:16.684785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x201c270 (9): Bad file descriptor 00:22:32.028 [2024-07-24 22:02:16.684837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.028 [2024-07-24 22:02:16.684859] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:32.028 [2024-07-24 22:02:16.684874] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:32.028 [2024-07-24 22:02:16.684906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:32.028 [2024-07-24 22:02:16.684922] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:32.028 [2024-07-24 22:02:26.752574] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
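The records above capture the failover under test: qpair 0x2019640 is disconnected and freed once the subsystem is deleted, bdev_nvme resets the controller, the first reconnect to 10.0.0.2 port 4421 is refused (errno 111) and leaves nqn.2016-06.io.spdk:cnode1 in a failed state, and a retry about ten seconds later succeeds. A rough bash sketch of how the same abort-and-reconnect pattern could be provoked by hand with the RPCs that appear in this log; the re-create step and the port 4421 listener are assumptions chosen to match the reconnect target seen above, not the literal host/multipath.sh sequence:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # tear down the active path while bdevperf I/O is in flight; queued commands
  # are then completed with ABORTED - SQ DELETION, as logged above
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # bring the subsystem back with a listener on the second port so the
  # initiator's periodic reconnect (the errno-111 retries) can eventually succeed
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421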
00:22:32.028 Received shutdown signal, test time was about 55.356652 seconds 00:22:32.028 00:22:32.028 Latency(us) 00:22:32.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.028 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.028 Verification LBA range: start 0x0 length 0x4000 00:22:32.028 Nvme0n1 : 55.36 7872.23 30.75 0.00 0.00 16226.97 774.52 7015926.69 00:22:32.028 =================================================================================================================== 00:22:32.028 Total : 7872.23 30.75 0.00 0.00 16226.97 774.52 7015926.69 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.028 rmmod nvme_tcp 00:22:32.028 rmmod nvme_fabrics 00:22:32.028 rmmod nvme_keyring 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 95607 ']' 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 95607 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 95607 ']' 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 95607 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95607 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:32.028 killing process with pid 95607 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95607' 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 95607 00:22:32.028 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 95607 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:32.288 00:22:32.288 real 1m1.265s 00:22:32.288 user 2m50.657s 00:22:32.288 sys 0m17.691s 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:32.288 22:02:37 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:32.288 ************************************ 00:22:32.288 END TEST nvmf_host_multipath 00:22:32.288 ************************************ 00:22:32.288 22:02:37 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:32.288 22:02:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:32.288 22:02:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:32.288 22:02:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.288 ************************************ 00:22:32.288 START TEST nvmf_timeout 00:22:32.288 ************************************ 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:32.288 * Looking for test storage... 
00:22:32.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.288 
22:02:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.288 22:02:37 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:32.288 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:32.289 Cannot find device "nvmf_tgt_br" 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:32.289 Cannot find device "nvmf_tgt_br2" 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:32.289 Cannot find device "nvmf_tgt_br" 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:22:32.289 22:02:37 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:32.547 Cannot find device "nvmf_tgt_br2" 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.547 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:32.547 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:32.548 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:32.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:32.831 00:22:32.831 --- 10.0.0.2 ping statistics --- 00:22:32.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.831 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:32.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:32.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:22:32.831 00:22:32.831 --- 10.0.0.3 ping statistics --- 00:22:32.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.831 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:32.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:32.831 00:22:32.831 --- 10.0.0.1 ping statistics --- 00:22:32.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.831 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96760 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96760 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 96760 ']' 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:32.831 22:02:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:32.831 [2024-07-24 22:02:38.367630] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
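The nvmf_veth_init trace above builds a self-contained NVMe/TCP test topology: the initiator keeps nvmf_init_if (10.0.0.1) in the default network namespace, the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the nvmf_br bridge joins the veth peers, and iptables opens TCP port 4420 toward the initiator interface. The earlier "Cannot find device" / "Cannot open network namespace" messages are the tolerated failures of the cleanup half of the function (@154-@163), which removes leftovers from a previous run before rebuilding. A trimmed sketch of the setup, reconstructed from the commands logged above (the second target interface follows the same pattern and is omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                          # bridge ties the two pairs together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                               # initiator -> target reachability check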
00:22:32.831 [2024-07-24 22:02:38.367751] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.831 [2024-07-24 22:02:38.510137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:33.118 [2024-07-24 22:02:38.600715] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.118 [2024-07-24 22:02:38.600766] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.118 [2024-07-24 22:02:38.600781] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.118 [2024-07-24 22:02:38.600791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.118 [2024-07-24 22:02:38.600813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.118 [2024-07-24 22:02:38.600924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.118 [2024-07-24 22:02:38.601385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.118 [2024-07-24 22:02:38.661223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:33.686 22:02:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:33.686 22:02:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:33.686 22:02:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.686 22:02:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.686 22:02:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:33.687 22:02:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.687 22:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.687 22:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:33.946 [2024-07-24 22:02:39.553858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.946 22:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:34.205 Malloc0 00:22:34.205 22:02:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.463 22:02:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.722 22:02:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.981 [2024-07-24 22:02:40.480732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96809 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96809 /var/tmp/bdevperf.sock 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 96809 ']' 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.981 22:02:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:34.981 [2024-07-24 22:02:40.535130] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:34.981 [2024-07-24 22:02:40.535225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96809 ] 00:22:34.981 [2024-07-24 22:02:40.670545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.240 [2024-07-24 22:02:40.760088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.240 [2024-07-24 22:02:40.818537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:35.807 22:02:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:35.807 22:02:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:35.807 22:02:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:36.066 22:02:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:36.325 NVMe0n1 00:22:36.325 22:02:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96833 00:22:36.325 22:02:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:36.325 22:02:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:36.584 Running I/O for 10 seconds... 
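Before the timeout scenario itself starts, the target is provisioned over its RPC socket and a separate bdevperf process is attached through /var/tmp/bdevperf.sock with a deliberately short reconnect policy (retry roughly every 2 seconds, abandon the controller after about 5 seconds without a connection). The same sequence, condensed into a sketch with the full rpc.py/bdevperf paths shortened and the bdevperf process backgrounded for readability:

  rpc.py nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as logged
  rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1     # retry option set at host/timeout.sh@45
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2           # the policy under test
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests              # kicks off the 10 s verify workload

Everything that follows exercises that policy: the listener is yanked away, bdevperf's controller has to ride through the resulting resets, and the test checks whether it is cleaned up once the loss timeout expires.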
00:22:37.523 22:02:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:37.523 [2024-07-24 22:02:43.175279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.523 [2024-07-24 22:02:43.175643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every command still queued on qid:1 between 22:02:43.175800 and 22:02:43.181493 (WRITE lba:65440-65696 and READ lba:64680-65416, len:8 each), all completed as ABORTED - SQ DELETION (00/08); elided here ...]
00:22:37.526 [2024-07-24 22:02:43.181504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1785e00 is same with the state(5) to be set
00:22:37.526 [2024-07-24 22:02:43.181518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:37.526 [2024-07-24 22:02:43.181526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:37.526 [2024-07-24 22:02:43.181534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65424 len:8 PRP1 0x0 PRP2 0x0
00:22:37.526 [2024-07-24 22:02:43.181543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.526 [2024-07-24 22:02:43.181596] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1785e00 was disconnected and freed. reset controller.
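This burst is the expected fallout of removing the only listener under an active connection: the TCP connection to 10.0.0.2:4420 goes down, every command still queued on I/O qpair 1 is completed as ABORTED - SQ DELETION, the qpair (0x1785e00) is freed, and bdev_nvme hands the controller to its reset/reconnect path. With nothing listening on the port any more, each reconnect attempt in the records below fails with errno 111 (ECONNREFUSED). A hedged sketch, not part of the test script, of how the same condition can be confirmed by hand from the initiator side using only addresses taken from this trace:

  # bash's /dev/tcp redirection fails with "Connection refused" while the listener is removed,
  # matching the errno = 111 lines in the log.
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "10.0.0.2:4420 is closed; reconnects will keep failing until the listener returns"
  fi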
00:22:37.526 [2024-07-24 22:02:43.181876] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.526 [2024-07-24 22:02:43.181957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17521e0 (9): Bad file descriptor 00:22:37.526 [2024-07-24 22:02:43.182051] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.526 [2024-07-24 22:02:43.182072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17521e0 with addr=10.0.0.2, port=4420 00:22:37.526 [2024-07-24 22:02:43.182083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17521e0 is same with the state(5) to be set 00:22:37.526 [2024-07-24 22:02:43.182101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17521e0 (9): Bad file descriptor 00:22:37.526 [2024-07-24 22:02:43.182116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:37.526 [2024-07-24 22:02:43.182125] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:37.526 [2024-07-24 22:02:43.182137] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:37.526 [2024-07-24 22:02:43.182156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.526 [2024-07-24 22:02:43.182166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.526 22:02:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:40.056 [2024-07-24 22:02:45.182378] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.056 [2024-07-24 22:02:45.182630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17521e0 with addr=10.0.0.2, port=4420 00:22:40.056 [2024-07-24 22:02:45.182778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17521e0 is same with the state(5) to be set 00:22:40.056 [2024-07-24 22:02:45.182920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17521e0 (9): Bad file descriptor 00:22:40.056 [2024-07-24 22:02:45.183078] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:40.056 [2024-07-24 22:02:45.183218] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:40.056 [2024-07-24 22:02:45.183374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.056 [2024-07-24 22:02:45.183433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:40.056 [2024-07-24 22:02:45.183654] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:40.056 22:02:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:41.985 [2024-07-24 22:02:47.183904] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.985 [2024-07-24 22:02:47.184151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17521e0 with addr=10.0.0.2, port=4420 00:22:41.985 [2024-07-24 22:02:47.184298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17521e0 is same with the state(5) to be set 00:22:41.985 [2024-07-24 22:02:47.184333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17521e0 (9): Bad file descriptor 00:22:41.985 [2024-07-24 22:02:47.184353] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:41.985 [2024-07-24 22:02:47.184363] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:41.985 [2024-07-24 22:02:47.184374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:41.985 [2024-07-24 22:02:47.184401] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:41.985 [2024-07-24 22:02:47.184413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.885 [2024-07-24 22:02:49.184479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:43.885 [2024-07-24 22:02:49.184547] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:43.885 [2024-07-24 22:02:49.184558] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:43.885 [2024-07-24 22:02:49.184568] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:43.885 [2024-07-24 22:02:49.184593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
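The two helpers traced at host/timeout.sh@41 and @37 above boil down to one RPC query each against the bdevperf application's socket. A minimal re-sketch of that shape, reconstructed from the xtrace (the rpc/sock variable names are mine; the RPC commands and jq filters are exactly what the trace shows):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

get_controller() {
    # bdev_nvme controllers currently known to bdevperf ("NVMe0" here)
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # bdevs exposed on top of those controllers ("NVMe0n1" here)
    "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
}

While the target listener is down but the controller-loss window has not yet expired, both queries still return the attached names, which is what the [[ NVMe0 == ... ]] and [[ NVMe0n1 == ... ]] checks at @57 and @58 assert; after the 5-second sleep at @61 the reconnect attempts above keep failing until the loss timeout gives up.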
00:22:44.820 00:22:44.820 Latency(us) 00:22:44.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.820 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:44.820 Verification LBA range: start 0x0 length 0x4000 00:22:44.820 NVMe0n1 : 8.13 994.93 3.89 15.75 0.00 126410.63 3932.16 7015926.69 00:22:44.820 =================================================================================================================== 00:22:44.820 Total : 994.93 3.89 15.75 0.00 126410.63 3932.16 7015926.69 00:22:44.820 0 00:22:45.078 22:02:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:45.078 22:02:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.079 22:02:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:45.337 22:02:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:45.337 22:02:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:45.337 22:02:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:45.337 22:02:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96833 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96809 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 96809 ']' 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 96809 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96809 00:22:45.596 killing process with pid 96809 00:22:45.596 Received shutdown signal, test time was about 9.157343 seconds 00:22:45.596 00:22:45.596 Latency(us) 00:22:45.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.596 =================================================================================================================== 00:22:45.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96809' 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 96809 00:22:45.596 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 96809 00:22:45.854 22:02:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.113 [2024-07-24 22:02:51.667091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
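Once that loss window lapses with every reconnect attempt failing, the same two queries come back empty and the first bdevperf instance is torn down, which is what the [[ '' == '' ]] checks at host/timeout.sh@62 and @63 above verify. A hedged sketch of that assertion step, reusing the helper shapes sketched earlier (the killprocess helper's internals are not reproduced; bdevperf_pid stands in for pid 96809):

# bdev_nvme is expected to have deleted both the controller and its
# namespace bdev once the loss timeout expired without a reconnect.
[[ -z "$(get_controller)" ]]        # matches [[ '' == '' ]] at @62
[[ -z "$(get_bdev)" ]]              # matches [[ '' == '' ]] at @63
kill "$bdevperf_pid"                # stand-in for the killprocess 96809 call above
wait "$bdevperf_pid" 2>/dev/null || true

The script then re-adds the listener (host/timeout.sh@71, the "Target Listening" notice just above) and starts the next bdevperf instance for the second phase traced below.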
00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96949 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96949 /var/tmp/bdevperf.sock 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 96949 ']' 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:46.113 22:02:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:46.113 [2024-07-24 22:02:51.729705] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:46.113 [2024-07-24 22:02:51.730010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96949 ] 00:22:46.372 [2024-07-24 22:02:51.866666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.372 [2024-07-24 22:02:51.947344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.372 [2024-07-24 22:02:52.004656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:47.324 22:02:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.324 22:02:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:47.324 22:02:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:47.324 22:02:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:47.584 NVMe0n1 00:22:47.584 22:02:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.584 22:02:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96973 00:22:47.584 22:02:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:47.841 Running I/O for 10 seconds... 
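The second phase stands up a fresh bdevperf instance with the reconnect knobs under test before the listener is pulled again. Condensed from the xtrace above into one hedged sketch (paths, flags and the NQN are copied from the log; the backgrounding and waitforlisten pattern follows the usual autotest_common.sh helpers; error handling omitted):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Core mask 0x4, idle until perform_tests (-z), 128 queued 4096-byte verify I/Os for 10 s.
"$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" "$sock"

# bdev_nvme options exactly as passed by timeout.sh (-r -1), then the attach
# carrying the timeouts this test exercises.
"$rpc" -s "$sock" bdev_nvme_set_options -r -1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the I/O run asynchronously and remember its pid, as @83/@84 do.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
rpc_pid=$!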
00:22:48.780 22:02:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.780 [2024-07-24 22:02:54.427721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.780 [2024-07-24 22:02:54.428727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.428903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.428928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.428942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.428953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.428965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.428975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.428986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.428996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 
[2024-07-24 22:02:54.429101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.780 [2024-07-24 22:02:54.429515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.780 [2024-07-24 22:02:54.429523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.429534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.429542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.429553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.429561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.429571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.429587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.429597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.429608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.429618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.429632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.429642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.429651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.429669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.429677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 
22:02:54.432692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.781 [2024-07-24 22:02:54.432927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.781 [2024-07-24 22:02:54.432947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.781 [2024-07-24 22:02:54.432967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.781 [2024-07-24 22:02:54.432987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.432998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.781 [2024-07-24 22:02:54.433007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.781 [2024-07-24 22:02:54.433018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.781 [2024-07-24 22:02:54.433027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.782 [2024-07-24 22:02:54.433047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.782 [2024-07-24 22:02:54.433067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60808 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 
22:02:54.433550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.782 [2024-07-24 22:02:54.433863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.782 [2024-07-24 22:02:54.433874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.433886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.433897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.433906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.433917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.433927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.433938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.433947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.433958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.433968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.433979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.433988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.783 [2024-07-24 22:02:54.434337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7ed0 is same with the state(5) to be set 00:22:48.783 [2024-07-24 22:02:54.434361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.783 [2024-07-24 22:02:54.434369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.783 [2024-07-24 22:02:54.434387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61184 len:8 PRP1 0x0 PRP2 0x0 00:22:48.783 [2024-07-24 22:02:54.434397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434464] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfb7ed0 was disconnected and freed. reset controller. 
00:22:48.783 [2024-07-24 22:02:54.434573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.783 [2024-07-24 22:02:54.434590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.783 [2024-07-24 22:02:54.434631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.783 [2024-07-24 22:02:54.434653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.783 [2024-07-24 22:02:54.434672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.783 [2024-07-24 22:02:54.434682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set 00:22:48.783 [2024-07-24 22:02:54.434913] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.783 [2024-07-24 22:02:54.434942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor 00:22:48.783 [2024-07-24 22:02:54.435037] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.783 [2024-07-24 22:02:54.435059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83020 with addr=10.0.0.2, port=4420 00:22:48.783 [2024-07-24 22:02:54.435081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set 00:22:48.783 [2024-07-24 22:02:54.435099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor 00:22:48.783 [2024-07-24 22:02:54.435116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.783 [2024-07-24 22:02:54.435126] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:48.783 [2024-07-24 22:02:54.435136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.783 [2024-07-24 22:02:54.435169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.783 [2024-07-24 22:02:54.435180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:48.783 22:02:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:49.721 [2024-07-24 22:02:55.435321] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:49.721 [2024-07-24 22:02:55.435389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83020 with addr=10.0.0.2, port=4420
00:22:49.721 [2024-07-24 22:02:55.435405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set
00:22:49.721 [2024-07-24 22:02:55.435430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor
00:22:49.721 [2024-07-24 22:02:55.435449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:49.721 [2024-07-24 22:02:55.435458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:49.721 [2024-07-24 22:02:55.435469] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:49.721 [2024-07-24 22:02:55.435511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:49.721 [2024-07-24 22:02:55.435534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:49.980 22:02:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:49.980 [2024-07-24 22:02:55.682199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:50.239 22:02:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96973
00:22:50.807 [2024-07-24 22:02:56.453686] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:58.928
00:22:58.928 Latency(us)
00:22:58.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:58.928 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:58.928 Verification LBA range: start 0x0 length 0x4000
00:22:58.928 NVMe0n1 : 10.01 5854.45 22.87 0.00 0.00 21816.41 1228.80 3019898.88
00:22:58.928 ===================================================================================================================
00:22:58.928 Total : 5854.45 22.87 0.00 0.00 21816.41 1228.80 3019898.88
00:22:58.928 0
00:22:58.928 22:03:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97078
00:22:58.928 22:03:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:58.928 22:03:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:58.928 Running I/O for 10 seconds...
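Note: the run above, and the second 10-second run that starts with the nvmf_subsystem_remove_listener call below, exercise the same pattern: the TCP listener is dropped, queued WRITEs complete as ABORTED - SQ DELETION, the host's reconnect attempts fail with connect() errno = 111 until the listener is re-added, and the controller reset then completes. The following is a minimal, illustrative sketch of that RPC sequence, reusing only the rpc.py and bdevperf.py invocations printed in this log; it is not host/timeout.sh itself, and the shell variables, the backgrounding, and the assumption that a bdevperf instance is already serving /var/tmp/bdevperf.sock are additions for readability.

  #!/usr/bin/env bash
  # Illustrative sketch: toggle the NVMe-oF TCP listener while I/O is running,
  # using the same commands that appear in this log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Start a test run over the bdevperf RPC socket (assumes bdevperf is already
  # listening on /var/tmp/bdevperf.sock, as in this job).
  "$bdevperf_py" -s /var/tmp/bdevperf.sock perform_tests &

  # Drop the listener: in-flight WRITEs are aborted (SQ DELETION) and the
  # host's reconnects fail with connect() errno = 111.
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  sleep 1

  # Re-add the listener: the next reconnect succeeds and the controller reset
  # completes ("Resetting controller successful" above).
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

The ordering is the point of the exercise: perform_tests is issued first so that I/O is in flight when the listener disappears, which is what produces the abort and reconnect messages captured here.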
00:22:58.928 22:03:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.928 [2024-07-24 22:03:04.622635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.928 [2024-07-24 22:03:04.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 
22:03:04.622903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.928 [2024-07-24 22:03:04.622915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.928 [2024-07-24 22:03:04.622924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.622936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.622946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.622957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.622966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.622986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.622996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.929 [2024-07-24 22:03:04.623818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.929 [2024-07-24 22:03:04.623828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.623984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.623993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.930 [2024-07-24 22:03:04.624014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfba460 is same with the state(5) to be set 00:22:58.930 [2024-07-24 22:03:04.624037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64864 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64336 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64344 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64352 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64360 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64368 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64376 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64384 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64872 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64880 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64888 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64896 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 
22:03:04.624486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64904 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64912 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64920 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64928 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64936 len:8 PRP1 0x0 PRP2 0x0 00:22:58.930 [2024-07-24 22:03:04.624664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.930 [2024-07-24 22:03:04.624679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.930 [2024-07-24 22:03:04.624687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.930 [2024-07-24 22:03:04.624695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64944 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624714] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.624721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.624730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64952 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.624756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.624765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64960 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.624791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.624800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64968 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.624849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.624857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64976 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.624884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.624892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64984 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.624918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.624926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64992 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:58.931 [2024-07-24 22:03:04.624953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.624961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65000 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.624970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.624984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.624992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65008 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65016 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65024 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65032 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65040 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625192] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65048 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65056 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65064 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65072 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65080 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65088 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65096 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65104 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65112 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65120 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.931 [2024-07-24 22:03:04.625584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.931 [2024-07-24 22:03:04.625592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.931 [2024-07-24 22:03:04.625600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65128 len:8 PRP1 0x0 PRP2 0x0 00:22:58.931 [2024-07-24 22:03:04.625617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.625634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.625641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.625649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65136 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.625658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.625668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.625687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 
[2024-07-24 22:03:04.625695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65144 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.625704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.625713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.625721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.625739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65152 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.625748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.625757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.625764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.625772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65160 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.625781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.625790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.625797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.625805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65168 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.625814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.625823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.625831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.625839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65176 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.625848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65184 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65192 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65200 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65208 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65216 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65224 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65232 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:65240 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65248 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65256 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.635969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.635977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65264 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.635985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.635994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.636001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.636009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65272 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.636018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.636027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.636034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.636042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65280 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.636050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.636059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.636066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.636074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65288 len:8 PRP1 0x0 PRP2 0x0 
00:22:58.932 [2024-07-24 22:03:04.636082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.636092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.636099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.636106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65296 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.636115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.636124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.636131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.636139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65304 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.636148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.636157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.932 [2024-07-24 22:03:04.636164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.932 [2024-07-24 22:03:04.636181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65312 len:8 PRP1 0x0 PRP2 0x0 00:22:58.932 [2024-07-24 22:03:04.636190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.932 [2024-07-24 22:03:04.636199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.933 [2024-07-24 22:03:04.636206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.933 [2024-07-24 22:03:04.636214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65320 len:8 PRP1 0x0 PRP2 0x0 00:22:58.933 [2024-07-24 22:03:04.636223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.933 [2024-07-24 22:03:04.636240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.933 [2024-07-24 22:03:04.636247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65328 len:8 PRP1 0x0 PRP2 0x0 00:22:58.933 [2024-07-24 22:03:04.636256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.933 [2024-07-24 22:03:04.636283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.933 [2024-07-24 22:03:04.636290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65336 len:8 PRP1 0x0 PRP2 0x0 00:22:58.933 [2024-07-24 22:03:04.636299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.933 [2024-07-24 22:03:04.636315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.933 [2024-07-24 22:03:04.636323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65344 len:8 PRP1 0x0 PRP2 0x0 00:22:58.933 [2024-07-24 22:03:04.636331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636391] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfba460 was disconnected and freed. reset controller. 00:22:58.933 [2024-07-24 22:03:04.636481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.933 [2024-07-24 22:03:04.636497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.933 [2024-07-24 22:03:04.636523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.933 [2024-07-24 22:03:04.636545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.933 [2024-07-24 22:03:04.636571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.933 [2024-07-24 22:03:04.636580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set 00:22:58.933 [2024-07-24 22:03:04.636843] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.933 [2024-07-24 22:03:04.636875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor 00:22:58.933 [2024-07-24 22:03:04.636965] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.933 [2024-07-24 22:03:04.636987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83020 with addr=10.0.0.2, port=4420 00:22:58.933 [2024-07-24 22:03:04.636998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set 00:22:58.933 [2024-07-24 22:03:04.637015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor 00:22:58.933 [2024-07-24 22:03:04.637031] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.933 [2024-07-24 22:03:04.637041] 
nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:58.933 [2024-07-24 22:03:04.637051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.933 [2024-07-24 22:03:04.637071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:58.933 [2024-07-24 22:03:04.637082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.192 22:03:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:00.129 [2024-07-24 22:03:05.637257] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.129 [2024-07-24 22:03:05.637543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83020 with addr=10.0.0.2, port=4420 00:23:00.129 [2024-07-24 22:03:05.637707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set 00:23:00.129 [2024-07-24 22:03:05.638001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor 00:23:00.129 [2024-07-24 22:03:05.638154] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.129 [2024-07-24 22:03:05.638274] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.129 [2024-07-24 22:03:05.638339] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.129 [2024-07-24 22:03:05.638471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:00.129 [2024-07-24 22:03:05.638535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:01.066 [2024-07-24 22:03:06.638872] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.066 [2024-07-24 22:03:06.639182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83020 with addr=10.0.0.2, port=4420 00:23:01.066 [2024-07-24 22:03:06.639325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set 00:23:01.066 [2024-07-24 22:03:06.639483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor 00:23:01.066 [2024-07-24 22:03:06.639536] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:01.066 [2024-07-24 22:03:06.639550] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:01.066 [2024-07-24 22:03:06.639562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:01.066 [2024-07-24 22:03:06.639588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.066 [2024-07-24 22:03:06.639600] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:02.002 [2024-07-24 22:03:07.641376] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:02.002 [2024-07-24 22:03:07.641458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83020 with addr=10.0.0.2, port=4420
00:23:02.002 [2024-07-24 22:03:07.641476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83020 is same with the state(5) to be set
00:23:02.002 [2024-07-24 22:03:07.641783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83020 (9): Bad file descriptor
00:23:02.002 [2024-07-24 22:03:07.642021] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:02.002 [2024-07-24 22:03:07.642051] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:02.002 [2024-07-24 22:03:07.642062] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:02.002 [2024-07-24 22:03:07.645811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:02.002 [2024-07-24 22:03:07.645842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:02.002 22:03:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:02.260 [2024-07-24 22:03:07.889098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:02.260 22:03:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 97078
00:23:03.195 [2024-07-24 22:03:08.680562] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
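At this point the host has been failing to reconnect to 10.0.0.2:4420 (uring connect() errno 111) and host/timeout.sh re-adds the TCP listener so the next controller reset can complete. A condensed, hedged sketch of the listener toggle this phase exercises, using the same rpc.py arguments that appear in the trace; the SPDK variable and the use of the target's default RPC socket are assumptions, not part of the test output:
# Rough sketch only -- not the full host/timeout.sh logic.
SPDK=/home/vagrant/spdk_repo/spdk
# Dropping the listener aborts queued I/O (the SQ DELETION storm above) and sends the
# host initiator into its reset/reconnect loop, which keeps failing with errno 111.
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
# Re-adding the listener lets the next reconnect attempt succeed ("Resetting controller successful.").
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420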
00:23:08.460
00:23:08.461 Latency(us)
00:23:08.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:08.461 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:08.461 Verification LBA range: start 0x0 length 0x4000
00:23:08.461 NVMe0n1 : 10.01 5500.59 21.49 3758.32 0.00 13783.00 707.49 3035150.89
00:23:08.461 ===================================================================================================================
00:23:08.461 Total : 5500.59 21.49 3758.32 0.00 13783.00 0.00 3035150.89
00:23:08.461 0
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96949
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 96949 ']'
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 96949
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96949
00:23:08.461 killing process with pid 96949 Received shutdown signal, test time was about 10.000000 seconds
00:23:08.461
00:23:08.461 Latency(us)
00:23:08.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:08.461 ===================================================================================================================
00:23:08.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96949'
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 96949
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 96949
00:23:08.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97192
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97192 /var/tmp/bdevperf.sock
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 97192 ']'
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:08.461 22:03:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:08.461 [2024-07-24 22:03:13.770375] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:23:08.461 [2024-07-24 22:03:13.770657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97192 ]
00:23:08.461 [2024-07-24 22:03:13.899482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:08.461 [2024-07-24 22:03:13.968566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:08.461 [2024-07-24 22:03:14.021683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:23:09.397 22:03:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:23:09.397 22:03:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0
00:23:09.397 22:03:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97192 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:23:09.397 22:03:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97207
00:23:09.397 22:03:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:23:09.397 22:03:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:23:09.655 NVMe0n1
00:23:09.655 22:03:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97244
00:23:09.655 22:03:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:09.655 22:03:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:23:09.913 Running I/O for 10 seconds...
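The trace above is the setup for the next timeout case: bdevperf is restarted in idle mode (-z) on /var/tmp/bdevperf.sock, bdev_nvme is given the options the test wants (-r -1 -e 9), the controller is attached with explicit reconnect limits (--ctrlr-loss-timeout-sec 5, --reconnect-delay-sec 2), and bdevperf.py perform_tests starts the 10-second randread run. A condensed sketch of that sequence with the same paths and arguments as the trace; the SPDK/SOCK variables and the socket-wait loop are simplifications standing in for autotest_common.sh's waitforlisten, not copies of it:
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock
# Start bdevperf idle (-z): the workload (-q 128 -o 4096 -w randread -t 10) is armed but
# does not run until perform_tests is sent over the RPC socket.
$SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!
# Simplified stand-in for waitforlisten: wait until the RPC socket exists.
while [ ! -S "$SOCK" ]; do sleep 0.1; done
# bdev_nvme options exactly as the trace sets them.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
# Attach NVMe0 over TCP; retry the connection every 2 s and give the controller up after 5 s of loss.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the run in the background, as the test does, so the listener can be toggled meanwhile.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &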
00:23:10.852 22:03:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.852 [2024-07-24 22:03:16.549220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.852 [2024-07-24 22:03:16.549340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-24 22:03:16.549356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.852 he state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.852 [2024-07-24 22:03:16.549373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.852 [2024-07-24 22:03:16.549382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.852 [2024-07-24 22:03:16.549391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.852 [2024-07-24 22:03:16.549399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-24 22:03:16.549408] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:10.852 he state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-24 22:03:16.549417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.852 he state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2331040 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.852 [2024-07-24 22:03:16.549773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549790] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 
00:23:10.853 [2024-07-24 22:03:16.549971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.549994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550164] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is 
same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6471b0 is same with the state(5) to be set 00:23:10.853 [2024-07-24 22:03:16.550459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.853 [2024-07-24 22:03:16.550494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.853 [2024-07-24 22:03:16.550514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.853 [2024-07-24 22:03:16.550524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.853 [2024-07-24 22:03:16.550535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.853 [2024-07-24 22:03:16.550545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.853 [2024-07-24 22:03:16.550557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.853 [2024-07-24 22:03:16.550566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.853 [2024-07-24 22:03:16.550577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.853 [2024-07-24 22:03:16.550586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.853 [2024-07-24 22:03:16.550597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.853 [2024-07-24 22:03:16.550607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.853 [2024-07-24 22:03:16.550629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 
[2024-07-24 22:03:16.550672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550888] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.550988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.550997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.854 [2024-07-24 22:03:16.551338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.854 [2024-07-24 22:03:16.551349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:86 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:10.854 [2024-07-24 22:03:16.551359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.854 - 00:23:10.857 [2024-07-24 22:03:16.551369 - 22:03:16.553272] nvme_qpair.c: 243/474: *NOTICE*: [the remaining queued READs on sqid:1 -- cid 85 down through cid 0, then cid 125 and cid 126 -- are printed and completed identically as ABORTED - SQ DELETION (00/08); only the lba/len of each command differs]
00:23:10.857 [2024-07-24 22:03:16.553282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2343860 is same with the state(5) to be set
00:23:10.857 [2024-07-24 22:03:16.553292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:10.857 [2024-07-24 22:03:16.553304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:10.857 [2024-07-24 22:03:16.553313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98368 len:8 PRP1 0x0 PRP2 0x0
00:23:10.857 [2024-07-24 22:03:16.553322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.857 [2024-07-24 22:03:16.553375]
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2343860 was disconnected and freed. reset controller. 00:23:10.857 [2024-07-24 22:03:16.553655] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.857 [2024-07-24 22:03:16.553687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2331040 (9): Bad file descriptor 00:23:10.857 [2024-07-24 22:03:16.553800] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.857 [2024-07-24 22:03:16.553827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2331040 with addr=10.0.0.2, port=4420 00:23:10.857 [2024-07-24 22:03:16.553838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2331040 is same with the state(5) to be set 00:23:10.857 [2024-07-24 22:03:16.553860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2331040 (9): Bad file descriptor 00:23:10.857 [2024-07-24 22:03:16.553876] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.857 [2024-07-24 22:03:16.553886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:10.857 [2024-07-24 22:03:16.553897] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.857 [2024-07-24 22:03:16.553917] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.857 [2024-07-24 22:03:16.553927] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.115 22:03:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 97244 00:23:13.018 [2024-07-24 22:03:18.554207] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.018 [2024-07-24 22:03:18.554253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2331040 with addr=10.0.0.2, port=4420 00:23:13.018 [2024-07-24 22:03:18.554269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2331040 is same with the state(5) to be set 00:23:13.018 [2024-07-24 22:03:18.554292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2331040 (9): Bad file descriptor 00:23:13.018 [2024-07-24 22:03:18.554322] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:13.018 [2024-07-24 22:03:18.554335] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:13.018 [2024-07-24 22:03:18.554346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:13.018 [2024-07-24 22:03:18.554374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
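Note: errno 111 in the uring_sock_create errors above is ECONNREFUSED on Linux, and the failed connect() attempts land roughly two seconds apart (22:03:16.55, 22:03:18.55, then 22:03:20.55 in the entries that follow), which is what a fixed bdev_nvme reconnect delay looks like from the host side; the final cycle at 22:03:22 finds the controller already in a failed state and gives up. A small shell sketch for pulling that cadence out of a saved copy of this console output; console.log is a placeholder file name, not an artifact this job produces:

  # list the gaps, in seconds, between successive failed connect() attempts
  grep -o '22:03:[0-9.]*] uring.c: 648:uring_sock_create' console.log \
    | cut -d']' -f1 \
    | awk -F: '{ t = $1*3600 + $2*60 + $3; if (NR > 1) printf "+%.3f s\n", t - prev; prev = t }'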
00:23:13.018 [2024-07-24 22:03:18.554385] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:14.920 [2024-07-24 22:03:20.554682] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:14.920 [2024-07-24 22:03:20.554746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2331040 with addr=10.0.0.2, port=4420
00:23:14.920 [2024-07-24 22:03:20.554762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2331040 is same with the state(5) to be set
00:23:14.920 [2024-07-24 22:03:20.554785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2331040 (9): Bad file descriptor
00:23:14.920 [2024-07-24 22:03:20.554805] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:14.920 [2024-07-24 22:03:20.554815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:14.920 [2024-07-24 22:03:20.554826] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:14.920 [2024-07-24 22:03:20.554852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:14.920 [2024-07-24 22:03:20.554864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:17.451 [2024-07-24 22:03:22.555057] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:17.451 [2024-07-24 22:03:22.555102] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:17.451 [2024-07-24 22:03:22.555115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:17.451 [2024-07-24 22:03:22.555125] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:23:17.451 [2024-07-24 22:03:22.555152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:18.018
00:23:18.018 Latency(us)
00:23:18.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.018 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:23:18.018 NVMe0n1 : 8.13 2139.54 8.36 15.74 0.00 59289.38 7566.43 7015926.69
00:23:18.018 ===================================================================================================================
00:23:18.018 Total : 2139.54 8.36 15.74 0.00 59289.38 7566.43 7015926.69
00:23:18.018 0
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:18.018 Attaching 5 probes...
00:23:18.018 1258.350117: reset bdev controller NVMe0
00:23:18.018 1258.426635: reconnect bdev controller NVMe0
00:23:18.018 3258.802439: reconnect delay bdev controller NVMe0
00:23:18.018 3258.821711: reconnect bdev controller NVMe0
00:23:18.018 5259.273854: reconnect delay bdev controller NVMe0
00:23:18.018 5259.291889: reconnect bdev controller NVMe0
00:23:18.018 7259.720107: reconnect delay bdev controller NVMe0
00:23:18.018 7259.756741: reconnect bdev controller NVMe0
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 97207
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97192
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 97192 ']'
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 97192
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97192
00:23:18.018 killing process with pid 97192
00:23:18.018 Received shutdown signal, test time was about 8.187242 seconds
00:23:18.018
00:23:18.018 Latency(us)
00:23:18.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.018 ===================================================================================================================
00:23:18.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97192'
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 97192
00:23:18.018 22:03:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 97192
00:23:18.277 22:03:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:18.535 rmmod nvme_tcp
00:23:18.535 rmmod nvme_fabrics
00:23:18.535 rmmod nvme_keyring
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- #
set -e 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96760 ']' 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96760 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 96760 ']' 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 96760 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96760 00:23:18.535 killing process with pid 96760 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96760' 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 96760 00:23:18.535 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 96760 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:18.816 ************************************ 00:23:18.816 END TEST nvmf_timeout 00:23:18.816 ************************************ 00:23:18.816 00:23:18.816 real 0m46.574s 00:23:18.816 user 2m16.633s 00:23:18.816 sys 0m5.590s 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:18.816 22:03:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:18.816 22:03:24 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:23:18.816 22:03:24 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:23:18.816 22:03:24 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.816 22:03:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.816 22:03:24 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:23:18.816 ************************************ 00:23:18.816 END TEST nvmf_tcp 00:23:18.816 ************************************ 00:23:18.816 00:23:18.816 real 14m49.268s 00:23:18.816 user 39m15.494s 00:23:18.816 sys 4m6.242s 00:23:18.816 22:03:24 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:18.816 22:03:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.816 22:03:24 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:23:18.816 22:03:24 -- spdk/autotest.sh@292 -- # run_test nvmf_dif 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:18.816 22:03:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:18.816 22:03:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:18.816 22:03:24 -- common/autotest_common.sh@10 -- # set +x 00:23:19.086 ************************************ 00:23:19.086 START TEST nvmf_dif 00:23:19.086 ************************************ 00:23:19.086 22:03:24 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:19.086 * Looking for test storage... 00:23:19.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:19.086 22:03:24 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:19.086 22:03:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.087 22:03:24 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.087 22:03:24 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.087 22:03:24 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.087 22:03:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.087 22:03:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.087 
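The NVME_HOSTNQN/NVME_HOSTID pair assigned above are two views of one UUID: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is that same UUID. A minimal stand-alone sketch of the same derivation, with uuidgen standing in for nvme gen-hostnqn (so the value will differ from the one in this run):

  # derive a host NQN / host ID pair the way nvmf/common.sh does
  uuid=$(uuidgen)
  NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
  NVME_HOSTID="${uuid}"
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")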
22:03:24 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.087 22:03:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:19.087 22:03:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.087 22:03:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:19.087 22:03:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:19.087 22:03:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:19.087 22:03:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:19.087 22:03:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.087 22:03:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:19.087 22:03:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:19.087 22:03:24 nvmf_dif -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:19.087 Cannot find device "nvmf_tgt_br" 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@155 -- # true 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.087 Cannot find device "nvmf_tgt_br2" 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@156 -- # true 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:19.087 Cannot find device "nvmf_tgt_br" 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@158 -- # true 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:19.087 Cannot find device "nvmf_tgt_br2" 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@159 -- # true 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:19.087 22:03:24 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:19.376 22:03:24 nvmf_dif -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:19.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:23:19.376 00:23:19.376 --- 10.0.0.2 ping statistics --- 00:23:19.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.376 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:19.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:19.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:23:19.376 00:23:19.376 --- 10.0.0.3 ping statistics --- 00:23:19.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.376 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:19.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:19.376 00:23:19.376 --- 10.0.0.1 ping statistics --- 00:23:19.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.376 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:19.376 22:03:24 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:19.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:19.635 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:19.635 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:19.635 22:03:25 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.635 22:03:25 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.635 22:03:25 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.636 22:03:25 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.636 22:03:25 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.636 22:03:25 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.895 22:03:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:19.895 22:03:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:19.895 22:03:25 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:19.895 22:03:25 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97677 00:23:19.895 22:03:25 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97677 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 97677 ']' 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.895 22:03:25 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.895 22:03:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:19.895 [2024-07-24 22:03:25.439882] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:23:19.895 [2024-07-24 22:03:25.439980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.895 [2024-07-24 22:03:25.579844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.154 [2024-07-24 22:03:25.664719] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
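At this point nvmf_veth_init has built the topology the three pings above exercised: 10.0.0.1/24 on nvmf_init_if in the root namespace, 10.0.0.2/24 and 10.0.0.3/24 on nvmf_tgt_if and nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, with the veth peers bridged on nvmf_br, and nvmf_tgt is now starting inside that namespace. A few standard iproute2 commands for inspecting that layout by hand; these are a sketch, not part of the test scripts:

  ip link show master nvmf_br                     # nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2
  ip -4 addr show dev nvmf_init_if                # 10.0.0.1/24 on the initiator side
  ip netns exec nvmf_tgt_ns_spdk ip -4 addr show  # 10.0.0.2/24 and 10.0.0.3/24 on the target side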
00:23:20.154 [2024-07-24 22:03:25.664784] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.154 [2024-07-24 22:03:25.664799] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.154 [2024-07-24 22:03:25.664835] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.154 [2024-07-24 22:03:25.664857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.154 [2024-07-24 22:03:25.664887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.154 [2024-07-24 22:03:25.722257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:20.721 22:03:26 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:20.721 22:03:26 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:23:20.721 22:03:26 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.721 22:03:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.721 22:03:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:20.721 22:03:26 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.721 22:03:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:20.721 22:03:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:20.981 22:03:26 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.981 22:03:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:20.981 [2024-07-24 22:03:26.445191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.981 22:03:26 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.981 22:03:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:20.981 22:03:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:20.981 22:03:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:20.981 22:03:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:20.981 ************************************ 00:23:20.981 START TEST fio_dif_1_default 00:23:20.981 ************************************ 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:20.981 bdev_null0 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:20.981 
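The fixture assembled here is a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exported through an NVMe/TCP subsystem on a transport created with --dif-insert-or-strip. rpc_cmd in these scripts is effectively a wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock, so the same fixture can be built by hand roughly as follows (the add_ns/add_listener calls appear a few steps further down in this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420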
22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:20.981 [2024-07-24 22:03:26.489233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.981 { 00:23:20.981 "params": { 00:23:20.981 "name": "Nvme$subsystem", 00:23:20.981 "trtype": "$TEST_TRANSPORT", 00:23:20.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.981 "adrfam": "ipv4", 00:23:20.981 "trsvcid": "$NVMF_PORT", 00:23:20.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.981 "hdgst": ${hdgst:-false}, 00:23:20.981 "ddgst": ${ddgst:-false} 00:23:20.981 }, 00:23:20.981 "method": "bdev_nvme_attach_controller" 00:23:20.981 } 00:23:20.981 EOF 00:23:20.981 )") 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:20.981 "params": { 00:23:20.981 "name": "Nvme0", 00:23:20.981 "trtype": "tcp", 00:23:20.981 "traddr": "10.0.0.2", 00:23:20.981 "adrfam": "ipv4", 00:23:20.981 "trsvcid": "4420", 00:23:20.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:20.981 "hdgst": false, 00:23:20.981 "ddgst": false 00:23:20.981 }, 00:23:20.981 "method": "bdev_nvme_attach_controller" 00:23:20.981 }' 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:20.981 22:03:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.981 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:20.981 fio-3.35 00:23:20.981 Starting 1 thread 00:23:33.187 00:23:33.187 filename0: (groupid=0, jobs=1): err= 0: pid=97744: Wed Jul 24 22:03:37 2024 00:23:33.187 read: IOPS=9097, BW=35.5MiB/s (37.3MB/s)(355MiB/10001msec) 00:23:33.187 slat (nsec): min=5824, max=60175, avg=8134.20, stdev=3514.09 00:23:33.187 clat (usec): min=209, max=4765, avg=415.61, stdev=49.12 00:23:33.187 lat (usec): min=233, max=4793, avg=423.75, stdev=49.76 00:23:33.187 clat percentiles (usec): 00:23:33.187 | 1.00th=[ 347], 5.00th=[ 
359], 10.00th=[ 367], 20.00th=[ 379], 00:23:33.187 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 424], 00:23:33.187 | 70.00th=[ 437], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 486], 00:23:33.187 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 586], 00:23:33.187 | 99.99th=[ 947] 00:23:33.187 bw ( KiB/s): min=35424, max=37344, per=100.00%, avg=36430.68, stdev=504.15, samples=19 00:23:33.187 iops : min= 8856, max= 9336, avg=9107.63, stdev=126.07, samples=19 00:23:33.187 lat (usec) : 250=0.01%, 500=97.94%, 750=2.05%, 1000=0.01% 00:23:33.187 lat (msec) : 2=0.01%, 10=0.01% 00:23:33.187 cpu : usr=84.96%, sys=13.19%, ctx=23, majf=0, minf=0 00:23:33.187 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:33.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.187 issued rwts: total=90985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.187 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:33.187 00:23:33.187 Run status group 0 (all jobs): 00:23:33.187 READ: bw=35.5MiB/s (37.3MB/s), 35.5MiB/s-35.5MiB/s (37.3MB/s-37.3MB/s), io=355MiB (373MB), run=10001-10001msec 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 00:23:33.187 real 0m10.958s 00:23:33.187 user 0m9.097s 00:23:33.187 sys 0m1.598s 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 ************************************ 00:23:33.187 END TEST fio_dif_1_default 00:23:33.187 ************************************ 00:23:33.187 22:03:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:33.187 22:03:37 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:33.187 22:03:37 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 ************************************ 00:23:33.187 START TEST fio_dif_1_multi_subsystems 00:23:33.187 ************************************ 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 bdev_null0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 [2024-07-24 22:03:37.497966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 bdev_null1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.187 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.187 { 00:23:33.187 "params": { 00:23:33.187 "name": "Nvme$subsystem", 00:23:33.187 "trtype": "$TEST_TRANSPORT", 00:23:33.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.188 "adrfam": "ipv4", 00:23:33.188 "trsvcid": "$NVMF_PORT", 00:23:33.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.188 "hdgst": ${hdgst:-false}, 00:23:33.188 "ddgst": ${ddgst:-false} 00:23:33.188 }, 00:23:33.188 "method": "bdev_nvme_attach_controller" 00:23:33.188 } 00:23:33.188 EOF 00:23:33.188 )") 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.188 { 00:23:33.188 "params": { 00:23:33.188 "name": "Nvme$subsystem", 00:23:33.188 "trtype": "$TEST_TRANSPORT", 00:23:33.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.188 "adrfam": "ipv4", 00:23:33.188 "trsvcid": "$NVMF_PORT", 00:23:33.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.188 "hdgst": ${hdgst:-false}, 00:23:33.188 "ddgst": ${ddgst:-false} 00:23:33.188 }, 00:23:33.188 "method": "bdev_nvme_attach_controller" 00:23:33.188 } 00:23:33.188 EOF 00:23:33.188 )") 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
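The JSON assembled above is handed to fio through the SPDK bdev ioengine plugin in the step that follows. A rough sketch of that invocation, outside the gen_fio_conf/gen_nvmf_target_json helpers, is shown here. The rw/bs/iodepth values and the job names filename0/filename1 come from the fio headers in this log; the job-file layout, the runtime knobs, and the Nvme0n1/Nvme1n1 bdev names are assumptions based on SPDK's usual bdev naming, and /tmp/bdev.json stands for a full SPDK JSON config containing the bdev_nvme_attach_controller entries printed just below.

    # assumed contents of /tmp/dif.fio -- job names match the fio output in this log
    [global]
    thread=1
    rw=randread
    bs=4k
    iodepth=4
    time_based=1
    runtime=10

    [filename0]
    filename=Nvme0n1    ; assumed bdev name for the controller attached as "Nvme0"

    [filename1]
    filename=Nvme1n1    ; assumed bdev name for the controller attached as "Nvme1"

    # invocation mirroring the log: LD_PRELOAD of the fio plugin plus --spdk_json_conf
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio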
00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:33.188 "params": { 00:23:33.188 "name": "Nvme0", 00:23:33.188 "trtype": "tcp", 00:23:33.188 "traddr": "10.0.0.2", 00:23:33.188 "adrfam": "ipv4", 00:23:33.188 "trsvcid": "4420", 00:23:33.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:33.188 "hdgst": false, 00:23:33.188 "ddgst": false 00:23:33.188 }, 00:23:33.188 "method": "bdev_nvme_attach_controller" 00:23:33.188 },{ 00:23:33.188 "params": { 00:23:33.188 "name": "Nvme1", 00:23:33.188 "trtype": "tcp", 00:23:33.188 "traddr": "10.0.0.2", 00:23:33.188 "adrfam": "ipv4", 00:23:33.188 "trsvcid": "4420", 00:23:33.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.188 "hdgst": false, 00:23:33.188 "ddgst": false 00:23:33.188 }, 00:23:33.188 "method": "bdev_nvme_attach_controller" 00:23:33.188 }' 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:33.188 22:03:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:33.188 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:33.188 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:33.188 fio-3.35 00:23:33.188 Starting 2 threads 00:23:43.161 00:23:43.161 filename0: (groupid=0, jobs=1): err= 0: pid=97903: Wed Jul 24 22:03:48 2024 00:23:43.161 read: IOPS=4835, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:23:43.161 slat (nsec): min=4769, max=99822, avg=15840.92, stdev=7967.72 00:23:43.161 clat (usec): min=568, max=5699, avg=784.54, stdev=87.47 00:23:43.161 lat (usec): min=581, max=5718, avg=800.38, stdev=90.38 00:23:43.161 clat percentiles (usec): 00:23:43.161 | 1.00th=[ 635], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 717], 00:23:43.161 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 799], 00:23:43.161 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 922], 00:23:43.161 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1074], 99.95th=[ 1074], 00:23:43.161 | 99.99th=[ 1778] 00:23:43.161 bw ( KiB/s): min=17696, max=20512, per=50.23%, avg=19432.42, stdev=1015.29, samples=19 00:23:43.161 iops : min= 4424, 
max= 5128, avg=4858.11, stdev=253.82, samples=19 00:23:43.161 lat (usec) : 750=35.17%, 1000=64.13% 00:23:43.161 lat (msec) : 2=0.69%, 10=0.01% 00:23:43.161 cpu : usr=88.26%, sys=10.17%, ctx=110, majf=0, minf=9 00:23:43.161 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.161 issued rwts: total=48356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.161 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:43.161 filename1: (groupid=0, jobs=1): err= 0: pid=97904: Wed Jul 24 22:03:48 2024 00:23:43.161 read: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:23:43.161 slat (nsec): min=6388, max=94765, avg=16092.68, stdev=8347.06 00:23:43.161 clat (usec): min=430, max=4655, avg=782.71, stdev=78.20 00:23:43.161 lat (usec): min=436, max=4680, avg=798.80, stdev=81.10 00:23:43.161 clat percentiles (usec): 00:23:43.161 | 1.00th=[ 660], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 717], 00:23:43.161 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 791], 00:23:43.161 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 906], 00:23:43.161 | 99.00th=[ 971], 99.50th=[ 1004], 99.90th=[ 1045], 99.95th=[ 1074], 00:23:43.161 | 99.99th=[ 1795] 00:23:43.161 bw ( KiB/s): min=17696, max=20512, per=50.24%, avg=19436.16, stdev=1015.15, samples=19 00:23:43.161 iops : min= 4424, max= 5128, avg=4859.00, stdev=253.77, samples=19 00:23:43.161 lat (usec) : 500=0.02%, 750=35.05%, 1000=64.42% 00:23:43.161 lat (msec) : 2=0.49%, 10=0.01% 00:23:43.161 cpu : usr=89.74%, sys=8.78%, ctx=18, majf=0, minf=0 00:23:43.161 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.161 issued rwts: total=48368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.161 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:43.161 00:23:43.161 Run status group 0 (all jobs): 00:23:43.161 READ: bw=37.8MiB/s (39.6MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=378MiB (396MB), run=10001-10001msec 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.161 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.162 22:03:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.162 00:23:43.162 real 0m11.058s 00:23:43.162 user 0m18.512s 00:23:43.162 sys 0m2.170s 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 ************************************ 00:23:43.162 END TEST fio_dif_1_multi_subsystems 00:23:43.162 ************************************ 00:23:43.162 22:03:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:43.162 22:03:48 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:43.162 22:03:48 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 ************************************ 00:23:43.162 START TEST fio_dif_rand_params 00:23:43.162 ************************************ 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 bdev_null0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 [2024-07-24 22:03:48.613286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.162 { 00:23:43.162 "params": { 00:23:43.162 "name": "Nvme$subsystem", 00:23:43.162 "trtype": "$TEST_TRANSPORT", 00:23:43.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.162 "adrfam": "ipv4", 00:23:43.162 "trsvcid": "$NVMF_PORT", 00:23:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.162 "hdgst": ${hdgst:-false}, 00:23:43.162 "ddgst": ${ddgst:-false} 00:23:43.162 }, 00:23:43.162 "method": "bdev_nvme_attach_controller" 00:23:43.162 } 00:23:43.162 EOF 00:23:43.162 )") 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:43.162 "params": { 00:23:43.162 "name": "Nvme0", 00:23:43.162 "trtype": "tcp", 00:23:43.162 "traddr": "10.0.0.2", 00:23:43.162 "adrfam": "ipv4", 00:23:43.162 "trsvcid": "4420", 00:23:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:43.162 "hdgst": false, 00:23:43.162 "ddgst": false 00:23:43.162 }, 00:23:43.162 "method": "bdev_nvme_attach_controller" 00:23:43.162 }' 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:43.162 22:03:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:43.162 22:03:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:43.162 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:43.162 ... 00:23:43.162 fio-3.35 00:23:43.162 Starting 3 threads 00:23:49.728 00:23:49.728 filename0: (groupid=0, jobs=1): err= 0: pid=98060: Wed Jul 24 22:03:54 2024 00:23:49.728 read: IOPS=256, BW=32.1MiB/s (33.6MB/s)(161MiB/5002msec) 00:23:49.728 slat (nsec): min=6640, max=49560, avg=11821.62, stdev=6391.15 00:23:49.728 clat (usec): min=10598, max=19569, avg=11655.26, stdev=606.10 00:23:49.728 lat (usec): min=10605, max=19594, avg=11667.08, stdev=606.59 00:23:49.728 clat percentiles (usec): 00:23:49.728 | 1.00th=[10814], 5.00th=[10945], 10.00th=[11076], 20.00th=[11207], 00:23:49.728 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:23:49.728 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12518], 00:23:49.728 | 99.00th=[12649], 99.50th=[12780], 99.90th=[19530], 99.95th=[19530], 00:23:49.728 | 99.99th=[19530] 00:23:49.728 bw ( KiB/s): min=31488, max=33792, per=33.21%, avg=32775.33, stdev=668.64, samples=9 00:23:49.728 iops : min= 246, max= 264, avg=256.00, stdev= 5.20, samples=9 00:23:49.728 lat (msec) : 20=100.00% 00:23:49.728 cpu : usr=93.62%, sys=5.72%, ctx=10, majf=0, minf=9 00:23:49.728 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.728 issued rwts: total=1284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.728 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:49.728 filename0: (groupid=0, jobs=1): err= 0: pid=98061: Wed Jul 24 22:03:54 2024 00:23:49.728 read: IOPS=257, BW=32.1MiB/s (33.7MB/s)(161MiB/5004msec) 00:23:49.728 slat (nsec): min=6602, max=66018, avg=14750.19, stdev=9088.39 00:23:49.728 clat (usec): min=9241, max=13293, avg=11625.46, stdev=482.80 00:23:49.728 lat (usec): min=9249, max=13313, avg=11640.21, stdev=482.78 00:23:49.728 clat percentiles (usec): 00:23:49.728 | 1.00th=[10814], 5.00th=[10945], 10.00th=[11076], 20.00th=[11207], 00:23:49.728 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:23:49.728 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12256], 95.00th=[12518], 00:23:49.728 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13304], 99.95th=[13304], 00:23:49.728 | 99.99th=[13304] 00:23:49.728 bw ( KiB/s): min=32256, max=33090, per=33.30%, avg=32860.67, stdev=343.49, samples=9 00:23:49.728 iops : min= 252, max= 258, avg=256.67, stdev= 2.65, samples=9 00:23:49.728 lat (msec) : 10=0.23%, 20=99.77% 00:23:49.728 cpu : usr=93.82%, sys=5.46%, ctx=23, majf=0, minf=0 00:23:49.728 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:49.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.728 issued rwts: total=1287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.728 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:49.728 filename0: (groupid=0, jobs=1): err= 0: pid=98062: Wed Jul 24 22:03:54 2024 00:23:49.728 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5008msec) 00:23:49.728 slat (nsec): min=4325, max=66393, avg=13095.10, stdev=7816.43 00:23:49.728 clat (usec): 
min=6935, max=13215, avg=11612.21, stdev=557.61 00:23:49.728 lat (usec): min=6939, max=13235, avg=11625.30, stdev=558.16 00:23:49.728 clat percentiles (usec): 00:23:49.728 | 1.00th=[10814], 5.00th=[10945], 10.00th=[11076], 20.00th=[11207], 00:23:49.728 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:23:49.729 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12518], 00:23:49.729 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:23:49.729 | 99.99th=[13173] 00:23:49.729 bw ( KiB/s): min=32256, max=33792, per=33.39%, avg=32947.20, stdev=566.68, samples=10 00:23:49.729 iops : min= 252, max= 264, avg=257.40, stdev= 4.43, samples=10 00:23:49.729 lat (msec) : 10=0.47%, 20=99.53% 00:23:49.729 cpu : usr=94.79%, sys=4.63%, ctx=7, majf=0, minf=0 00:23:49.729 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:49.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.729 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.729 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:49.729 00:23:49.729 Run status group 0 (all jobs): 00:23:49.729 READ: bw=96.4MiB/s (101MB/s), 32.1MiB/s-32.2MiB/s (33.6MB/s-33.8MB/s), io=483MiB (506MB), run=5002-5008msec 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 bdev_null0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 [2024-07-24 22:03:54.563951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 bdev_null1 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 bdev_null2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:23:49.729 { 00:23:49.729 "params": { 00:23:49.729 "name": "Nvme$subsystem", 00:23:49.729 "trtype": "$TEST_TRANSPORT", 00:23:49.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.729 "adrfam": "ipv4", 00:23:49.729 "trsvcid": "$NVMF_PORT", 00:23:49.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.729 "hdgst": ${hdgst:-false}, 00:23:49.729 "ddgst": ${ddgst:-false} 00:23:49.729 }, 00:23:49.729 "method": "bdev_nvme_attach_controller" 00:23:49.729 } 00:23:49.729 EOF 00:23:49.729 )") 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:49.729 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.730 { 00:23:49.730 "params": { 00:23:49.730 "name": "Nvme$subsystem", 00:23:49.730 "trtype": "$TEST_TRANSPORT", 00:23:49.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.730 "adrfam": "ipv4", 00:23:49.730 "trsvcid": "$NVMF_PORT", 00:23:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.730 "hdgst": ${hdgst:-false}, 00:23:49.730 "ddgst": ${ddgst:-false} 00:23:49.730 }, 00:23:49.730 "method": "bdev_nvme_attach_controller" 00:23:49.730 } 00:23:49.730 EOF 00:23:49.730 )") 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 
00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.730 { 00:23:49.730 "params": { 00:23:49.730 "name": "Nvme$subsystem", 00:23:49.730 "trtype": "$TEST_TRANSPORT", 00:23:49.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.730 "adrfam": "ipv4", 00:23:49.730 "trsvcid": "$NVMF_PORT", 00:23:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.730 "hdgst": ${hdgst:-false}, 00:23:49.730 "ddgst": ${ddgst:-false} 00:23:49.730 }, 00:23:49.730 "method": "bdev_nvme_attach_controller" 00:23:49.730 } 00:23:49.730 EOF 00:23:49.730 )") 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:49.730 "params": { 00:23:49.730 "name": "Nvme0", 00:23:49.730 "trtype": "tcp", 00:23:49.730 "traddr": "10.0.0.2", 00:23:49.730 "adrfam": "ipv4", 00:23:49.730 "trsvcid": "4420", 00:23:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:49.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:49.730 "hdgst": false, 00:23:49.730 "ddgst": false 00:23:49.730 }, 00:23:49.730 "method": "bdev_nvme_attach_controller" 00:23:49.730 },{ 00:23:49.730 "params": { 00:23:49.730 "name": "Nvme1", 00:23:49.730 "trtype": "tcp", 00:23:49.730 "traddr": "10.0.0.2", 00:23:49.730 "adrfam": "ipv4", 00:23:49.730 "trsvcid": "4420", 00:23:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.730 "hdgst": false, 00:23:49.730 "ddgst": false 00:23:49.730 }, 00:23:49.730 "method": "bdev_nvme_attach_controller" 00:23:49.730 },{ 00:23:49.730 "params": { 00:23:49.730 "name": "Nvme2", 00:23:49.730 "trtype": "tcp", 00:23:49.730 "traddr": "10.0.0.2", 00:23:49.730 "adrfam": "ipv4", 00:23:49.730 "trsvcid": "4420", 00:23:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:49.730 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:49.730 "hdgst": false, 00:23:49.730 "ddgst": false 00:23:49.730 }, 00:23:49.730 "method": "bdev_nvme_attach_controller" 00:23:49.730 }' 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:49.730 
22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:49.730 22:03:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:49.730 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:49.730 ... 00:23:49.730 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:49.730 ... 00:23:49.730 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:49.730 ... 00:23:49.730 fio-3.35 00:23:49.730 Starting 24 threads 00:24:01.929 00:24:01.929 filename0: (groupid=0, jobs=1): err= 0: pid=98157: Wed Jul 24 22:04:05 2024 00:24:01.929 read: IOPS=219, BW=879KiB/s (900kB/s)(8828KiB/10045msec) 00:24:01.929 slat (usec): min=5, max=8037, avg=28.91, stdev=277.89 00:24:01.929 clat (msec): min=18, max=122, avg=72.59, stdev=18.90 00:24:01.929 lat (msec): min=18, max=122, avg=72.62, stdev=18.90 00:24:01.929 clat percentiles (msec): 00:24:01.929 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:24:01.929 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:24:01.929 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 100], 95.00th=[ 106], 00:24:01.929 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 121], 00:24:01.929 | 99.99th=[ 123] 00:24:01.929 bw ( KiB/s): min= 744, max= 1104, per=4.28%, avg=876.40, stdev=98.27, samples=20 00:24:01.929 iops : min= 186, max= 276, avg=219.10, stdev=24.57, samples=20 00:24:01.929 lat (msec) : 20=0.63%, 50=13.96%, 100=76.67%, 250=8.74% 00:24:01.929 cpu : usr=40.09%, sys=1.94%, ctx=1304, majf=0, minf=9 00:24:01.929 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:01.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.929 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.929 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.929 filename0: (groupid=0, jobs=1): err= 0: pid=98158: Wed Jul 24 22:04:05 2024 00:24:01.929 read: IOPS=202, BW=811KiB/s (830kB/s)(8140KiB/10039msec) 00:24:01.929 slat (usec): min=6, max=8032, avg=23.80, stdev=251.22 00:24:01.929 clat (msec): min=16, max=143, avg=78.75, stdev=21.30 00:24:01.929 lat (msec): min=16, max=143, avg=78.77, stdev=21.30 00:24:01.929 clat percentiles (msec): 00:24:01.929 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:24:01.929 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:24:01.929 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 112], 00:24:01.929 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:01.929 | 99.99th=[ 144] 00:24:01.929 bw ( KiB/s): min= 528, max= 1016, per=3.94%, avg=807.60, stdev=124.90, samples=20 00:24:01.929 iops : min= 132, max= 254, avg=201.90, stdev=31.22, samples=20 00:24:01.929 lat (msec) : 20=0.79%, 50=9.53%, 100=74.30%, 250=15.38% 00:24:01.929 cpu : usr=34.24%, sys=1.36%, ctx=916, majf=0, minf=9 00:24:01.929 IO depths : 1=0.1%, 2=2.5%, 4=10.0%, 8=72.4%, 
16=15.0%, 32=0.0%, >=64=0.0% 00:24:01.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.929 complete : 0=0.0%, 4=90.2%, 8=7.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.929 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.929 filename0: (groupid=0, jobs=1): err= 0: pid=98159: Wed Jul 24 22:04:05 2024 00:24:01.929 read: IOPS=213, BW=856KiB/s (876kB/s)(8576KiB/10021msec) 00:24:01.929 slat (usec): min=7, max=8022, avg=22.21, stdev=193.60 00:24:01.929 clat (msec): min=26, max=181, avg=74.64, stdev=21.13 00:24:01.929 lat (msec): min=26, max=181, avg=74.66, stdev=21.13 00:24:01.929 clat percentiles (msec): 00:24:01.929 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:24:01.929 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:24:01.929 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 110], 00:24:01.929 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 182], 00:24:01.929 | 99.99th=[ 182] 00:24:01.929 bw ( KiB/s): min= 528, max= 1072, per=4.17%, avg=853.60, stdev=132.48, samples=20 00:24:01.929 iops : min= 132, max= 268, avg=213.40, stdev=33.12, samples=20 00:24:01.929 lat (msec) : 50=15.44%, 100=72.90%, 250=11.66% 00:24:01.929 cpu : usr=33.56%, sys=1.20%, ctx=909, majf=0, minf=10 00:24:01.929 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:01.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.929 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.929 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.929 filename0: (groupid=0, jobs=1): err= 0: pid=98160: Wed Jul 24 22:04:05 2024 00:24:01.929 read: IOPS=219, BW=880KiB/s (901kB/s)(8808KiB/10014msec) 00:24:01.929 slat (usec): min=4, max=8053, avg=32.46, stdev=247.58 00:24:01.929 clat (msec): min=18, max=247, avg=72.55, stdev=23.74 00:24:01.929 lat (msec): min=18, max=247, avg=72.59, stdev=23.74 00:24:01.929 clat percentiles (msec): 00:24:01.929 | 1.00th=[ 28], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:24:01.929 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 75], 00:24:01.929 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:24:01.929 | 99.00th=[ 132], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 247], 00:24:01.929 | 99.99th=[ 247] 00:24:01.929 bw ( KiB/s): min= 528, max= 1024, per=4.24%, avg=868.05, stdev=160.91, samples=19 00:24:01.929 iops : min= 132, max= 256, avg=217.00, stdev=40.25, samples=19 00:24:01.929 lat (msec) : 20=0.27%, 50=18.03%, 100=69.85%, 250=11.85% 00:24:01.930 cpu : usr=39.41%, sys=1.69%, ctx=1234, majf=0, minf=9 00:24:01.930 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:24:01.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 issued rwts: total=2202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.930 filename0: (groupid=0, jobs=1): err= 0: pid=98161: Wed Jul 24 22:04:05 2024 00:24:01.930 read: IOPS=225, BW=900KiB/s (922kB/s)(9020KiB/10019msec) 00:24:01.930 slat (usec): min=3, max=9033, avg=36.90, stdev=348.52 00:24:01.930 clat (msec): min=19, max=199, avg=70.94, stdev=21.61 00:24:01.930 lat (msec): min=19, max=199, 
avg=70.98, stdev=21.61 00:24:01.930 clat percentiles (msec): 00:24:01.930 | 1.00th=[ 25], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 53], 00:24:01.930 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 74], 00:24:01.930 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 107], 00:24:01.930 | 99.00th=[ 118], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 201], 00:24:01.930 | 99.99th=[ 201] 00:24:01.930 bw ( KiB/s): min= 656, max= 1072, per=4.35%, avg=890.11, stdev=106.96, samples=19 00:24:01.930 iops : min= 164, max= 268, avg=222.53, stdev=26.74, samples=19 00:24:01.930 lat (msec) : 20=0.27%, 50=15.61%, 100=74.68%, 250=9.45% 00:24:01.930 cpu : usr=41.71%, sys=1.89%, ctx=1510, majf=0, minf=9 00:24:01.930 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:01.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.930 filename0: (groupid=0, jobs=1): err= 0: pid=98162: Wed Jul 24 22:04:05 2024 00:24:01.930 read: IOPS=200, BW=804KiB/s (823kB/s)(8048KiB/10016msec) 00:24:01.930 slat (usec): min=4, max=8048, avg=40.77, stdev=399.41 00:24:01.930 clat (msec): min=21, max=194, avg=79.34, stdev=23.09 00:24:01.930 lat (msec): min=21, max=194, avg=79.39, stdev=23.10 00:24:01.930 clat percentiles (msec): 00:24:01.930 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:24:01.930 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 85], 00:24:01.930 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 113], 00:24:01.930 | 99.00th=[ 132], 99.50th=[ 171], 99.90th=[ 171], 99.95th=[ 194], 00:24:01.930 | 99.99th=[ 194] 00:24:01.930 bw ( KiB/s): min= 528, max= 1024, per=3.85%, avg=789.05, stdev=147.54, samples=19 00:24:01.930 iops : min= 132, max= 256, avg=197.26, stdev=36.89, samples=19 00:24:01.930 lat (msec) : 50=12.23%, 100=70.63%, 250=17.15% 00:24:01.930 cpu : usr=31.98%, sys=1.35%, ctx=884, majf=0, minf=9 00:24:01.930 IO depths : 1=0.1%, 2=3.1%, 4=12.3%, 8=70.4%, 16=14.2%, 32=0.0%, >=64=0.0% 00:24:01.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 complete : 0=0.0%, 4=90.4%, 8=6.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.930 filename0: (groupid=0, jobs=1): err= 0: pid=98163: Wed Jul 24 22:04:05 2024 00:24:01.930 read: IOPS=213, BW=856KiB/s (876kB/s)(8604KiB/10055msec) 00:24:01.930 slat (usec): min=4, max=8038, avg=26.63, stdev=299.36 00:24:01.930 clat (msec): min=5, max=150, avg=74.56, stdev=24.36 00:24:01.930 lat (msec): min=5, max=150, avg=74.58, stdev=24.36 00:24:01.930 clat percentiles (msec): 00:24:01.930 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 60], 00:24:01.930 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:24:01.930 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 110], 00:24:01.930 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 150], 00:24:01.930 | 99.99th=[ 150] 00:24:01.930 bw ( KiB/s): min= 616, max= 1520, per=4.17%, avg=853.85, stdev=193.78, samples=20 00:24:01.930 iops : min= 154, max= 380, avg=213.45, stdev=48.46, samples=20 00:24:01.930 lat (msec) : 10=3.72%, 20=0.65%, 50=9.02%, 100=70.39%, 250=16.23% 00:24:01.930 cpu : usr=35.57%, sys=1.39%, 
ctx=1032, majf=0, minf=9 00:24:01.930 IO depths : 1=0.1%, 2=1.4%, 4=5.3%, 8=76.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:01.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.930 filename0: (groupid=0, jobs=1): err= 0: pid=98164: Wed Jul 24 22:04:05 2024 00:24:01.930 read: IOPS=224, BW=898KiB/s (920kB/s)(8996KiB/10018msec) 00:24:01.930 slat (usec): min=7, max=11068, avg=36.30, stdev=372.98 00:24:01.930 clat (msec): min=18, max=201, avg=71.11, stdev=21.42 00:24:01.930 lat (msec): min=18, max=201, avg=71.14, stdev=21.42 00:24:01.930 clat percentiles (msec): 00:24:01.930 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 53], 00:24:01.930 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:24:01.930 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 102], 95.00th=[ 106], 00:24:01.930 | 99.00th=[ 114], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 201], 00:24:01.930 | 99.99th=[ 203] 00:24:01.930 bw ( KiB/s): min= 633, max= 1072, per=4.34%, avg=888.05, stdev=112.29, samples=19 00:24:01.930 iops : min= 158, max= 268, avg=222.00, stdev=28.10, samples=19 00:24:01.930 lat (msec) : 20=0.27%, 50=16.67%, 100=72.12%, 250=10.94% 00:24:01.930 cpu : usr=32.03%, sys=1.31%, ctx=895, majf=0, minf=9 00:24:01.930 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:01.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.930 filename1: (groupid=0, jobs=1): err= 0: pid=98165: Wed Jul 24 22:04:05 2024 00:24:01.930 read: IOPS=204, BW=819KiB/s (839kB/s)(8228KiB/10045msec) 00:24:01.930 slat (usec): min=3, max=8033, avg=28.42, stdev=254.15 00:24:01.930 clat (msec): min=7, max=145, avg=77.92, stdev=22.60 00:24:01.930 lat (msec): min=7, max=145, avg=77.95, stdev=22.61 00:24:01.930 clat percentiles (msec): 00:24:01.930 | 1.00th=[ 9], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 64], 00:24:01.930 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:24:01.930 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 111], 00:24:01.930 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 146], 00:24:01.930 | 99.99th=[ 146] 00:24:01.930 bw ( KiB/s): min= 524, max= 1264, per=3.98%, avg=816.20, stdev=156.03, samples=20 00:24:01.930 iops : min= 131, max= 316, avg=204.05, stdev=39.01, samples=20 00:24:01.930 lat (msec) : 10=1.56%, 20=1.46%, 50=6.51%, 100=74.53%, 250=15.95% 00:24:01.930 cpu : usr=41.73%, sys=1.83%, ctx=1277, majf=0, minf=9 00:24:01.930 IO depths : 1=0.1%, 2=3.4%, 4=13.5%, 8=68.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:01.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 complete : 0=0.0%, 4=91.2%, 8=5.9%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 issued rwts: total=2057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.930 filename1: (groupid=0, jobs=1): err= 0: pid=98166: Wed Jul 24 22:04:05 2024 00:24:01.930 read: IOPS=215, BW=862KiB/s (882kB/s)(8636KiB/10021msec) 00:24:01.930 slat (usec): min=7, max=5032, avg=26.91, stdev=204.19 00:24:01.930 
clat (msec): min=22, max=174, avg=74.10, stdev=24.29 00:24:01.930 lat (msec): min=22, max=174, avg=74.13, stdev=24.28 00:24:01.930 clat percentiles (msec): 00:24:01.930 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:24:01.930 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:24:01.930 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:24:01.930 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 176], 00:24:01.930 | 99.99th=[ 176] 00:24:01.930 bw ( KiB/s): min= 400, max= 1048, per=4.17%, avg=853.26, stdev=173.04, samples=19 00:24:01.930 iops : min= 100, max= 262, avg=213.32, stdev=43.26, samples=19 00:24:01.930 lat (msec) : 50=17.14%, 100=68.50%, 250=14.36% 00:24:01.930 cpu : usr=40.86%, sys=1.63%, ctx=1213, majf=0, minf=9 00:24:01.930 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:01.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.930 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.930 filename1: (groupid=0, jobs=1): err= 0: pid=98167: Wed Jul 24 22:04:05 2024 00:24:01.930 read: IOPS=224, BW=899KiB/s (921kB/s)(9004KiB/10016msec) 00:24:01.930 slat (usec): min=3, max=8040, avg=40.15, stdev=363.91 00:24:01.930 clat (msec): min=26, max=201, avg=71.03, stdev=20.93 00:24:01.930 lat (msec): min=26, max=201, avg=71.07, stdev=20.93 00:24:01.930 clat percentiles (msec): 00:24:01.930 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:24:01.930 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 73], 00:24:01.930 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 106], 00:24:01.930 | 99.00th=[ 118], 99.50th=[ 178], 99.90th=[ 178], 99.95th=[ 203], 00:24:01.930 | 99.99th=[ 203] 00:24:01.930 bw ( KiB/s): min= 619, max= 1048, per=4.36%, avg=892.37, stdev=115.45, samples=19 00:24:01.931 iops : min= 154, max= 262, avg=223.05, stdev=28.96, samples=19 00:24:01.931 lat (msec) : 50=18.66%, 100=73.70%, 250=7.64% 00:24:01.931 cpu : usr=38.98%, sys=1.67%, ctx=1132, majf=0, minf=9 00:24:01.931 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=82.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:01.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.931 filename1: (groupid=0, jobs=1): err= 0: pid=98168: Wed Jul 24 22:04:05 2024 00:24:01.931 read: IOPS=203, BW=812KiB/s (832kB/s)(8140KiB/10024msec) 00:24:01.931 slat (usec): min=3, max=8042, avg=31.60, stdev=320.72 00:24:01.931 clat (msec): min=26, max=172, avg=78.60, stdev=22.93 00:24:01.931 lat (msec): min=26, max=172, avg=78.63, stdev=22.92 00:24:01.931 clat percentiles (msec): 00:24:01.931 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 59], 00:24:01.931 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 83], 00:24:01.931 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 120], 00:24:01.931 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 174], 00:24:01.931 | 99.99th=[ 174] 00:24:01.931 bw ( KiB/s): min= 528, max= 1000, per=3.95%, avg=809.85, stdev=145.69, samples=20 00:24:01.931 iops : min= 132, max= 250, avg=202.45, stdev=36.43, samples=20 00:24:01.931 lat (msec) : 
50=12.29%, 100=69.14%, 250=18.57% 00:24:01.931 cpu : usr=36.63%, sys=1.54%, ctx=1013, majf=0, minf=9 00:24:01.931 IO depths : 1=0.1%, 2=2.5%, 4=9.9%, 8=72.8%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:01.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 complete : 0=0.0%, 4=89.9%, 8=7.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.931 filename1: (groupid=0, jobs=1): err= 0: pid=98169: Wed Jul 24 22:04:05 2024 00:24:01.931 read: IOPS=215, BW=861KiB/s (882kB/s)(8648KiB/10039msec) 00:24:01.931 slat (usec): min=3, max=8047, avg=38.21, stdev=375.29 00:24:01.931 clat (msec): min=8, max=132, avg=74.04, stdev=20.81 00:24:01.931 lat (msec): min=8, max=132, avg=74.08, stdev=20.81 00:24:01.931 clat percentiles (msec): 00:24:01.931 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 58], 00:24:01.931 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 79], 00:24:01.931 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 103], 95.00th=[ 107], 00:24:01.931 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 126], 99.95th=[ 126], 00:24:01.931 | 99.99th=[ 132] 00:24:01.931 bw ( KiB/s): min= 656, max= 1088, per=4.20%, avg=860.50, stdev=108.40, samples=20 00:24:01.931 iops : min= 164, max= 272, avg=215.10, stdev=27.09, samples=20 00:24:01.931 lat (msec) : 10=0.65%, 20=1.76%, 50=10.36%, 100=75.44%, 250=11.79% 00:24:01.931 cpu : usr=32.31%, sys=1.25%, ctx=873, majf=0, minf=9 00:24:01.931 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:24:01.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.931 filename1: (groupid=0, jobs=1): err= 0: pid=98170: Wed Jul 24 22:04:05 2024 00:24:01.931 read: IOPS=206, BW=828KiB/s (848kB/s)(8312KiB/10040msec) 00:24:01.931 slat (usec): min=6, max=4046, avg=21.44, stdev=128.68 00:24:01.931 clat (msec): min=35, max=140, avg=77.13, stdev=20.42 00:24:01.931 lat (msec): min=35, max=140, avg=77.15, stdev=20.43 00:24:01.931 clat percentiles (msec): 00:24:01.931 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:24:01.931 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:24:01.931 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 110], 00:24:01.931 | 99.00th=[ 127], 99.50th=[ 134], 99.90th=[ 140], 99.95th=[ 142], 00:24:01.931 | 99.99th=[ 142] 00:24:01.931 bw ( KiB/s): min= 528, max= 992, per=4.02%, avg=824.70, stdev=135.76, samples=20 00:24:01.931 iops : min= 132, max= 248, avg=206.15, stdev=33.98, samples=20 00:24:01.931 lat (msec) : 50=13.96%, 100=70.16%, 250=15.88% 00:24:01.931 cpu : usr=36.93%, sys=1.52%, ctx=1039, majf=0, minf=9 00:24:01.931 IO depths : 1=0.1%, 2=2.3%, 4=9.0%, 8=73.7%, 16=15.0%, 32=0.0%, >=64=0.0% 00:24:01.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 complete : 0=0.0%, 4=89.7%, 8=8.3%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 issued rwts: total=2078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.931 filename1: (groupid=0, jobs=1): err= 0: pid=98171: Wed Jul 24 22:04:05 2024 00:24:01.931 read: IOPS=204, BW=820KiB/s (839kB/s)(8216KiB/10024msec) 00:24:01.931 
slat (usec): min=3, max=8039, avg=29.33, stdev=306.48 00:24:01.931 clat (msec): min=28, max=173, avg=77.88, stdev=19.94 00:24:01.931 lat (msec): min=28, max=173, avg=77.91, stdev=19.95 00:24:01.931 clat percentiles (msec): 00:24:01.931 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:24:01.931 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:24:01.931 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 109], 00:24:01.931 | 99.00th=[ 120], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 174], 00:24:01.931 | 99.99th=[ 174] 00:24:01.931 bw ( KiB/s): min= 640, max= 1000, per=3.99%, avg=817.40, stdev=124.74, samples=20 00:24:01.931 iops : min= 160, max= 250, avg=204.35, stdev=31.19, samples=20 00:24:01.931 lat (msec) : 50=11.30%, 100=72.64%, 250=16.07% 00:24:01.931 cpu : usr=35.37%, sys=1.29%, ctx=981, majf=0, minf=9 00:24:01.931 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=72.4%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:01.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 complete : 0=0.0%, 4=90.1%, 8=7.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 issued rwts: total=2054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.931 filename1: (groupid=0, jobs=1): err= 0: pid=98172: Wed Jul 24 22:04:05 2024 00:24:01.931 read: IOPS=220, BW=881KiB/s (902kB/s)(8872KiB/10070msec) 00:24:01.931 slat (usec): min=5, max=8030, avg=21.74, stdev=190.53 00:24:01.931 clat (usec): min=1715, max=141962, avg=72398.35, stdev=22470.11 00:24:01.931 lat (usec): min=1727, max=141982, avg=72420.10, stdev=22469.56 00:24:01.931 clat percentiles (msec): 00:24:01.931 | 1.00th=[ 8], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:24:01.931 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:24:01.931 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 108], 00:24:01.931 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 134], 00:24:01.931 | 99.99th=[ 142] 00:24:01.931 bw ( KiB/s): min= 664, max= 1392, per=4.30%, avg=880.65, stdev=162.55, samples=20 00:24:01.931 iops : min= 166, max= 348, avg=220.15, stdev=40.65, samples=20 00:24:01.931 lat (msec) : 2=0.09%, 10=1.98%, 20=1.44%, 50=12.35%, 100=72.86% 00:24:01.931 lat (msec) : 250=11.27% 00:24:01.931 cpu : usr=39.54%, sys=1.65%, ctx=1212, majf=0, minf=9 00:24:01.931 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:01.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 issued rwts: total=2218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.931 filename2: (groupid=0, jobs=1): err= 0: pid=98173: Wed Jul 24 22:04:05 2024 00:24:01.931 read: IOPS=228, BW=914KiB/s (936kB/s)(9156KiB/10017msec) 00:24:01.931 slat (usec): min=4, max=8033, avg=34.17, stdev=301.23 00:24:01.931 clat (msec): min=16, max=260, avg=69.88, stdev=24.48 00:24:01.931 lat (msec): min=16, max=260, avg=69.91, stdev=24.49 00:24:01.931 clat percentiles (msec): 00:24:01.931 | 1.00th=[ 25], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:24:01.931 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 73], 00:24:01.931 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 105], 00:24:01.931 | 99.00th=[ 114], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 262], 00:24:01.931 | 99.99th=[ 262] 00:24:01.931 bw ( KiB/s): min= 553, max= 1072, per=4.41%, 
avg=903.21, stdev=127.87, samples=19 00:24:01.931 iops : min= 138, max= 268, avg=225.79, stdev=32.01, samples=19 00:24:01.931 lat (msec) : 20=0.44%, 50=19.00%, 100=71.60%, 250=8.87%, 500=0.09% 00:24:01.931 cpu : usr=42.13%, sys=1.82%, ctx=1215, majf=0, minf=9 00:24:01.931 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:01.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.931 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.931 filename2: (groupid=0, jobs=1): err= 0: pid=98174: Wed Jul 24 22:04:05 2024 00:24:01.931 read: IOPS=218, BW=876KiB/s (897kB/s)(8792KiB/10040msec) 00:24:01.931 slat (usec): min=4, max=8034, avg=29.63, stdev=293.80 00:24:01.931 clat (msec): min=18, max=131, avg=72.87, stdev=18.95 00:24:01.931 lat (msec): min=18, max=131, avg=72.90, stdev=18.94 00:24:01.931 clat percentiles (msec): 00:24:01.931 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:24:01.931 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:24:01.931 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 99], 95.00th=[ 107], 00:24:01.932 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 120], 00:24:01.932 | 99.99th=[ 132] 00:24:01.932 bw ( KiB/s): min= 744, max= 1104, per=4.26%, avg=872.80, stdev=89.83, samples=20 00:24:01.932 iops : min= 186, max= 276, avg=218.20, stdev=22.46, samples=20 00:24:01.932 lat (msec) : 20=0.73%, 50=13.97%, 100=76.62%, 250=8.69% 00:24:01.932 cpu : usr=36.03%, sys=1.37%, ctx=1066, majf=0, minf=9 00:24:01.932 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:01.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.932 filename2: (groupid=0, jobs=1): err= 0: pid=98175: Wed Jul 24 22:04:05 2024 00:24:01.932 read: IOPS=211, BW=844KiB/s (864kB/s)(8456KiB/10018msec) 00:24:01.932 slat (usec): min=3, max=8036, avg=37.82, stdev=314.89 00:24:01.932 clat (msec): min=22, max=198, avg=75.56, stdev=22.54 00:24:01.932 lat (msec): min=22, max=198, avg=75.60, stdev=22.55 00:24:01.932 clat percentiles (msec): 00:24:01.932 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:24:01.932 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 78], 00:24:01.932 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 105], 95.00th=[ 109], 00:24:01.932 | 99.00th=[ 130], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 199], 00:24:01.932 | 99.99th=[ 199] 00:24:01.932 bw ( KiB/s): min= 528, max= 1024, per=4.07%, avg=833.26, stdev=155.29, samples=19 00:24:01.932 iops : min= 132, max= 256, avg=208.32, stdev=38.82, samples=19 00:24:01.932 lat (msec) : 50=13.34%, 100=73.08%, 250=13.58% 00:24:01.932 cpu : usr=42.64%, sys=1.71%, ctx=1679, majf=0, minf=9 00:24:01.932 IO depths : 1=0.1%, 2=2.4%, 4=9.4%, 8=73.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:01.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 complete : 0=0.0%, 4=89.5%, 8=8.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.932 
filename2: (groupid=0, jobs=1): err= 0: pid=98176: Wed Jul 24 22:04:05 2024 00:24:01.932 read: IOPS=218, BW=875KiB/s (896kB/s)(8812KiB/10071msec) 00:24:01.932 slat (usec): min=4, max=8023, avg=19.29, stdev=170.84 00:24:01.932 clat (msec): min=3, max=144, avg=72.91, stdev=25.05 00:24:01.932 lat (msec): min=3, max=144, avg=72.93, stdev=25.05 00:24:01.932 clat percentiles (msec): 00:24:01.932 | 1.00th=[ 5], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 57], 00:24:01.932 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:24:01.932 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 109], 00:24:01.932 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 146], 00:24:01.932 | 99.99th=[ 146] 00:24:01.932 bw ( KiB/s): min= 592, max= 1648, per=4.27%, avg=874.50, stdev=225.81, samples=20 00:24:01.932 iops : min= 148, max= 412, avg=218.60, stdev=56.48, samples=20 00:24:01.932 lat (msec) : 4=0.73%, 10=3.63%, 20=0.64%, 50=13.16%, 100=67.36% 00:24:01.932 lat (msec) : 250=14.48% 00:24:01.932 cpu : usr=33.26%, sys=1.51%, ctx=1217, majf=0, minf=9 00:24:01.932 IO depths : 1=0.2%, 2=1.8%, 4=6.5%, 8=76.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:01.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 complete : 0=0.0%, 4=89.2%, 8=9.4%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.932 filename2: (groupid=0, jobs=1): err= 0: pid=98177: Wed Jul 24 22:04:05 2024 00:24:01.932 read: IOPS=221, BW=888KiB/s (909kB/s)(8892KiB/10018msec) 00:24:01.932 slat (usec): min=7, max=8030, avg=23.11, stdev=200.85 00:24:01.932 clat (msec): min=18, max=198, avg=72.00, stdev=21.52 00:24:01.932 lat (msec): min=18, max=198, avg=72.02, stdev=21.52 00:24:01.932 clat percentiles (msec): 00:24:01.932 | 1.00th=[ 29], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:24:01.932 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:24:01.932 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 107], 00:24:01.932 | 99.00th=[ 121], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 199], 00:24:01.932 | 99.99th=[ 199] 00:24:01.932 bw ( KiB/s): min= 681, max= 1048, per=4.28%, avg=877.11, stdev=102.59, samples=19 00:24:01.932 iops : min= 170, max= 262, avg=219.26, stdev=25.67, samples=19 00:24:01.932 lat (msec) : 20=0.27%, 50=17.09%, 100=72.11%, 250=10.53% 00:24:01.932 cpu : usr=39.47%, sys=1.58%, ctx=1241, majf=0, minf=9 00:24:01.932 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:01.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.932 filename2: (groupid=0, jobs=1): err= 0: pid=98178: Wed Jul 24 22:04:05 2024 00:24:01.932 read: IOPS=209, BW=838KiB/s (858kB/s)(8408KiB/10034msec) 00:24:01.932 slat (usec): min=7, max=10995, avg=35.12, stdev=342.65 00:24:01.932 clat (msec): min=33, max=147, avg=76.13, stdev=21.74 00:24:01.932 lat (msec): min=33, max=147, avg=76.17, stdev=21.73 00:24:01.932 clat percentiles (msec): 00:24:01.932 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 57], 00:24:01.932 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:24:01.932 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 112], 00:24:01.932 | 99.00th=[ 
142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:24:01.932 | 99.99th=[ 148] 00:24:01.932 bw ( KiB/s): min= 512, max= 1072, per=4.08%, avg=836.40, stdev=151.67, samples=20 00:24:01.932 iops : min= 128, max= 268, avg=209.10, stdev=37.92, samples=20 00:24:01.932 lat (msec) : 50=12.61%, 100=72.26%, 250=15.13% 00:24:01.932 cpu : usr=40.78%, sys=1.82%, ctx=1245, majf=0, minf=0 00:24:01.932 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:01.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 complete : 0=0.0%, 4=88.9%, 8=9.9%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.932 filename2: (groupid=0, jobs=1): err= 0: pid=98179: Wed Jul 24 22:04:05 2024 00:24:01.932 read: IOPS=206, BW=824KiB/s (844kB/s)(8268KiB/10030msec) 00:24:01.932 slat (usec): min=4, max=5024, avg=22.06, stdev=141.27 00:24:01.932 clat (msec): min=35, max=143, avg=77.46, stdev=20.03 00:24:01.932 lat (msec): min=35, max=143, avg=77.48, stdev=20.02 00:24:01.932 clat percentiles (msec): 00:24:01.932 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 59], 00:24:01.932 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:24:01.932 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 110], 00:24:01.932 | 99.00th=[ 129], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:01.932 | 99.99th=[ 144] 00:24:01.932 bw ( KiB/s): min= 624, max= 1000, per=4.02%, avg=823.10, stdev=119.19, samples=20 00:24:01.932 iops : min= 156, max= 250, avg=205.75, stdev=29.81, samples=20 00:24:01.932 lat (msec) : 50=10.30%, 100=76.29%, 250=13.40% 00:24:01.932 cpu : usr=39.64%, sys=1.72%, ctx=1286, majf=0, minf=9 00:24:01.932 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=72.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:24:01.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.932 filename2: (groupid=0, jobs=1): err= 0: pid=98180: Wed Jul 24 22:04:05 2024 00:24:01.932 read: IOPS=209, BW=836KiB/s (856kB/s)(8380KiB/10019msec) 00:24:01.932 slat (usec): min=4, max=8027, avg=34.97, stdev=360.58 00:24:01.932 clat (msec): min=27, max=179, avg=76.27, stdev=22.89 00:24:01.932 lat (msec): min=27, max=179, avg=76.30, stdev=22.90 00:24:01.932 clat percentiles (msec): 00:24:01.932 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:24:01.932 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:24:01.932 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 114], 00:24:01.932 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 180], 00:24:01.932 | 99.99th=[ 180] 00:24:01.932 bw ( KiB/s): min= 512, max= 1024, per=4.08%, avg=835.00, stdev=161.54, samples=20 00:24:01.932 iops : min= 128, max= 256, avg=208.75, stdev=40.39, samples=20 00:24:01.932 lat (msec) : 50=15.80%, 100=67.30%, 250=16.90% 00:24:01.932 cpu : usr=35.05%, sys=1.49%, ctx=1024, majf=0, minf=9 00:24:01.932 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.3%, 16=15.0%, 32=0.0%, >=64=0.0% 00:24:01.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 complete : 0=0.0%, 4=89.2%, 8=9.1%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.932 issued rwts: total=2095,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:24:01.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:01.932 00:24:01.932 Run status group 0 (all jobs): 00:24:01.932 READ: bw=20.0MiB/s (21.0MB/s), 804KiB/s-914KiB/s (823kB/s-936kB/s), io=201MiB (211MB), run=10014-10071msec 00:24:01.932 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:01.932 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:01.932 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:01.932 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
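For reference, the randread run summarized above is driven by a job description the test writes to /dev/fd/61, alongside the bdev JSON written to /dev/fd/62. The sketch below is a hedged reconstruction of an equivalent job file: rw, bs, iodepth and the ioengine come from the fio banner, while the bdev names (Nvme0n1..Nvme2n1) and numjobs=8 (inferred from 24 threads across three filenames) are assumptions, not values visible in the log.

    # randread.fio -- hedged sketch of the generated job file (names/numjobs assumed)
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=4096
    iodepth=16
    numjobs=8
    time_based=1
    runtime=10

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1

Invoked the same way the harness does, with regular files standing in for the fd redirects (both paths are placeholders):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvmf.json randread.fio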
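The destroy_subsystems 0 1 2 trace above boils down to two RPCs per index: drop the NVMe-oF subsystem, then delete its backing null bdev. A hedged sketch of the same teardown done by hand with SPDK's scripts/rpc.py (the default RPC socket is assumed):

    # Manual equivalent of destroy_subsystems 0 1 2 as traced above
    for i in 0 1 2; do
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
      scripts/rpc.py bdev_null_delete "bdev_null${i}"
    done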
00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 bdev_null0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 [2024-07-24 22:04:05.877060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 bdev_null1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.933 { 00:24:01.933 "params": { 00:24:01.933 "name": "Nvme$subsystem", 00:24:01.933 "trtype": "$TEST_TRANSPORT", 00:24:01.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.933 "adrfam": "ipv4", 00:24:01.933 "trsvcid": "$NVMF_PORT", 00:24:01.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.933 "hdgst": ${hdgst:-false}, 00:24:01.933 "ddgst": ${ddgst:-false} 00:24:01.933 }, 00:24:01.933 "method": "bdev_nvme_attach_controller" 00:24:01.933 } 00:24:01.933 EOF 00:24:01.933 )") 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.933 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.933 { 00:24:01.933 "params": { 00:24:01.933 "name": "Nvme$subsystem", 00:24:01.933 "trtype": "$TEST_TRANSPORT", 00:24:01.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.933 "adrfam": "ipv4", 00:24:01.934 "trsvcid": "$NVMF_PORT", 00:24:01.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.934 "hdgst": ${hdgst:-false}, 00:24:01.934 "ddgst": ${ddgst:-false} 00:24:01.934 }, 00:24:01.934 "method": "bdev_nvme_attach_controller" 00:24:01.934 } 00:24:01.934 EOF 00:24:01.934 )") 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:01.934 "params": { 00:24:01.934 "name": "Nvme0", 00:24:01.934 "trtype": "tcp", 00:24:01.934 "traddr": "10.0.0.2", 00:24:01.934 "adrfam": "ipv4", 00:24:01.934 "trsvcid": "4420", 00:24:01.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:01.934 "hdgst": false, 00:24:01.934 "ddgst": false 00:24:01.934 }, 00:24:01.934 "method": "bdev_nvme_attach_controller" 00:24:01.934 },{ 00:24:01.934 "params": { 00:24:01.934 "name": "Nvme1", 00:24:01.934 "trtype": "tcp", 00:24:01.934 "traddr": "10.0.0.2", 00:24:01.934 "adrfam": "ipv4", 00:24:01.934 "trsvcid": "4420", 00:24:01.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.934 "hdgst": false, 00:24:01.934 "ddgst": false 00:24:01.934 }, 00:24:01.934 "method": "bdev_nvme_attach_controller" 00:24:01.934 }' 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:01.934 22:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:01.934 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:01.934 ... 00:24:01.934 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:01.934 ... 
00:24:01.934 fio-3.35 00:24:01.934 Starting 4 threads 00:24:06.153 00:24:06.153 filename0: (groupid=0, jobs=1): err= 0: pid=98307: Wed Jul 24 22:04:11 2024 00:24:06.153 read: IOPS=2097, BW=16.4MiB/s (17.2MB/s)(82.0MiB/5002msec) 00:24:06.153 slat (usec): min=3, max=125, avg=14.93, stdev= 8.39 00:24:06.153 clat (usec): min=685, max=11161, avg=3766.71, stdev=906.84 00:24:06.153 lat (usec): min=692, max=11170, avg=3781.64, stdev=906.91 00:24:06.153 clat percentiles (usec): 00:24:06.153 | 1.00th=[ 1237], 5.00th=[ 1975], 10.00th=[ 2409], 20.00th=[ 3195], 00:24:06.153 | 30.00th=[ 3621], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3949], 00:24:06.153 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 4948], 00:24:06.153 | 99.00th=[ 5735], 99.50th=[ 6587], 99.90th=[ 7963], 99.95th=[ 8356], 00:24:06.153 | 99.99th=[11076] 00:24:06.153 bw ( KiB/s): min=14336, max=17488, per=24.98%, avg=16506.67, stdev=958.27, samples=9 00:24:06.153 iops : min= 1792, max= 2186, avg=2063.33, stdev=119.78, samples=9 00:24:06.153 lat (usec) : 750=0.06%, 1000=0.36% 00:24:06.153 lat (msec) : 2=4.76%, 4=56.88%, 10=37.92%, 20=0.02% 00:24:06.153 cpu : usr=92.48%, sys=6.46%, ctx=40, majf=0, minf=0 00:24:06.153 IO depths : 1=0.1%, 2=12.1%, 4=58.1%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.153 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.153 issued rwts: total=10494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.153 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:06.153 filename0: (groupid=0, jobs=1): err= 0: pid=98308: Wed Jul 24 22:04:11 2024 00:24:06.153 read: IOPS=2035, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5003msec) 00:24:06.153 slat (nsec): min=3905, max=89380, avg=18585.16, stdev=9137.01 00:24:06.153 clat (usec): min=1004, max=16020, avg=3869.57, stdev=970.94 00:24:06.153 lat (usec): min=1016, max=16041, avg=3888.16, stdev=970.71 00:24:06.153 clat percentiles (usec): 00:24:06.153 | 1.00th=[ 1631], 5.00th=[ 1975], 10.00th=[ 2474], 20.00th=[ 3228], 00:24:06.153 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 4047], 00:24:06.153 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5211], 00:24:06.153 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 9110], 99.95th=[12125], 00:24:06.153 | 99.99th=[12125] 00:24:06.153 bw ( KiB/s): min=14384, max=19712, per=24.80%, avg=16384.00, stdev=1472.57, samples=9 00:24:06.153 iops : min= 1798, max= 2464, avg=2048.00, stdev=184.07, samples=9 00:24:06.153 lat (msec) : 2=5.67%, 4=52.68%, 10=41.58%, 20=0.08% 00:24:06.153 cpu : usr=92.14%, sys=6.90%, ctx=5, majf=0, minf=9 00:24:06.153 IO depths : 1=0.1%, 2=13.1%, 4=57.1%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.153 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.153 issued rwts: total=10183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:06.154 filename1: (groupid=0, jobs=1): err= 0: pid=98309: Wed Jul 24 22:04:11 2024 00:24:06.154 read: IOPS=2118, BW=16.6MiB/s (17.4MB/s)(82.8MiB/5002msec) 00:24:06.154 slat (nsec): min=3839, max=88756, avg=15988.85, stdev=9062.58 00:24:06.154 clat (usec): min=877, max=11121, avg=3726.89, stdev=945.27 00:24:06.154 lat (usec): min=886, max=11128, avg=3742.88, stdev=946.12 00:24:06.154 clat percentiles (usec): 00:24:06.154 | 1.00th=[ 1598], 5.00th=[ 1926], 
10.00th=[ 2212], 20.00th=[ 2900], 00:24:06.154 | 30.00th=[ 3556], 40.00th=[ 3752], 50.00th=[ 3851], 60.00th=[ 3916], 00:24:06.154 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5014], 00:24:06.154 | 99.00th=[ 5800], 99.50th=[ 6652], 99.90th=[ 8029], 99.95th=[ 8356], 00:24:06.154 | 99.99th=[11076] 00:24:06.154 bw ( KiB/s): min=14768, max=19712, per=25.92%, avg=17127.11, stdev=1459.10, samples=9 00:24:06.154 iops : min= 1846, max= 2464, avg=2140.89, stdev=182.39, samples=9 00:24:06.154 lat (usec) : 1000=0.10% 00:24:06.154 lat (msec) : 2=6.35%, 4=57.15%, 10=36.38%, 20=0.02% 00:24:06.154 cpu : usr=92.34%, sys=6.74%, ctx=25, majf=0, minf=0 00:24:06.154 IO depths : 1=0.1%, 2=10.4%, 4=58.8%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.154 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.154 issued rwts: total=10597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:06.154 filename1: (groupid=0, jobs=1): err= 0: pid=98310: Wed Jul 24 22:04:11 2024 00:24:06.154 read: IOPS=2008, BW=15.7MiB/s (16.5MB/s)(78.5MiB/5002msec) 00:24:06.154 slat (nsec): min=3965, max=90334, avg=17405.01, stdev=9562.90 00:24:06.154 clat (usec): min=1230, max=11101, avg=3927.55, stdev=833.03 00:24:06.154 lat (usec): min=1237, max=11109, avg=3944.95, stdev=833.91 00:24:06.154 clat percentiles (usec): 00:24:06.154 | 1.00th=[ 1696], 5.00th=[ 2212], 10.00th=[ 2737], 20.00th=[ 3556], 00:24:06.154 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4146], 00:24:06.154 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4817], 95.00th=[ 5014], 00:24:06.154 | 99.00th=[ 5669], 99.50th=[ 6652], 99.90th=[ 8029], 99.95th=[ 8356], 00:24:06.154 | 99.99th=[11076] 00:24:06.154 bw ( KiB/s): min=14336, max=17392, per=24.32%, avg=16071.11, stdev=900.95, samples=9 00:24:06.154 iops : min= 1792, max= 2174, avg=2008.89, stdev=112.62, samples=9 00:24:06.154 lat (msec) : 2=2.79%, 4=52.89%, 10=44.30%, 20=0.02% 00:24:06.154 cpu : usr=92.56%, sys=6.50%, ctx=336, majf=0, minf=9 00:24:06.154 IO depths : 1=0.1%, 2=14.3%, 4=56.6%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.154 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.154 issued rwts: total=10045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:06.154 00:24:06.154 Run status group 0 (all jobs): 00:24:06.154 READ: bw=64.5MiB/s (67.7MB/s), 15.7MiB/s-16.6MiB/s (16.5MB/s-17.4MB/s), io=323MiB (338MB), run=5002-5003msec 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 22:04:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 00:24:06.412 real 0m23.351s 00:24:06.412 user 2m5.124s 00:24:06.412 sys 0m6.742s 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:06.412 ************************************ 00:24:06.412 END TEST fio_dif_rand_params 00:24:06.412 ************************************ 00:24:06.412 22:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 22:04:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:06.412 22:04:11 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:06.412 22:04:11 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:06.412 22:04:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 ************************************ 00:24:06.412 START TEST fio_dif_digest 00:24:06.412 ************************************ 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:06.412 22:04:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 bdev_null0 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.412 [2024-07-24 22:04:12.023250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:06.412 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:06.413 { 00:24:06.413 "params": { 00:24:06.413 "name": "Nvme$subsystem", 00:24:06.413 "trtype": "$TEST_TRANSPORT", 00:24:06.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.413 "adrfam": "ipv4", 00:24:06.413 "trsvcid": "$NVMF_PORT", 00:24:06.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.413 "hdgst": ${hdgst:-false}, 00:24:06.413 "ddgst": ${ddgst:-false} 00:24:06.413 }, 00:24:06.413 "method": "bdev_nvme_attach_controller" 00:24:06.413 } 00:24:06.413 EOF 00:24:06.413 )") 00:24:06.413 
22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
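For reference, the attach-and-run sequence traced above can be reproduced outside the harness with plain files in place of the /dev/fd substitutions. The sketch below is an approximation under a few assumptions: the target from this run is still listening on 10.0.0.2:4420, bdev.json and dif.fio are scratch files invented here for illustration, and the job options are trimmed down to the ones visible in the fio banner (randread, 128 KiB blocks, iodepth 3, 10 s runtime).

# bdev.json: the same bdev_nvme_attach_controller call the harness prints below,
# wrapped in the subsystems layout that --spdk_json_conf expects
cat > bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
JSON

# dif.fio: cut-down job file; the attached namespace shows up as bdev Nvme0n1
cat > dif.fio <<'FIO'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
FIO

# preload the SPDK fio plugin and point fio at the JSON config, as the harness does
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio

The hdgst/ddgst flags in the attach call are what make this a digest run: header and data digests are negotiated on the NVMe/TCP connection, so each PDU carries a CRC32C checksum that the initiator verifies on the read path exercised here.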
00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:06.413 "params": { 00:24:06.413 "name": "Nvme0", 00:24:06.413 "trtype": "tcp", 00:24:06.413 "traddr": "10.0.0.2", 00:24:06.413 "adrfam": "ipv4", 00:24:06.413 "trsvcid": "4420", 00:24:06.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:06.413 "hdgst": true, 00:24:06.413 "ddgst": true 00:24:06.413 }, 00:24:06.413 "method": "bdev_nvme_attach_controller" 00:24:06.413 }' 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:06.413 22:04:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.670 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:06.670 ... 
00:24:06.670 fio-3.35 00:24:06.670 Starting 3 threads 00:24:18.902 00:24:18.902 filename0: (groupid=0, jobs=1): err= 0: pid=98418: Wed Jul 24 22:04:22 2024 00:24:18.902 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(279MiB/10009msec) 00:24:18.902 slat (usec): min=4, max=108, avg=28.18, stdev=15.20 00:24:18.902 clat (usec): min=11707, max=18616, avg=13409.29, stdev=729.39 00:24:18.902 lat (usec): min=11722, max=18632, avg=13437.46, stdev=729.66 00:24:18.902 clat percentiles (usec): 00:24:18.902 | 1.00th=[12125], 5.00th=[12387], 10.00th=[12518], 20.00th=[12780], 00:24:18.902 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:24:18.902 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14484], 00:24:18.902 | 99.00th=[15008], 99.50th=[16450], 99.90th=[18482], 99.95th=[18482], 00:24:18.902 | 99.99th=[18744] 00:24:18.902 bw ( KiB/s): min=27648, max=29952, per=33.32%, avg=28495.60, stdev=604.99, samples=20 00:24:18.902 iops : min= 216, max= 234, avg=222.60, stdev= 4.73, samples=20 00:24:18.902 lat (msec) : 20=100.00% 00:24:18.902 cpu : usr=95.04%, sys=4.41%, ctx=16, majf=0, minf=0 00:24:18.902 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:18.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.902 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.902 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:18.902 filename0: (groupid=0, jobs=1): err= 0: pid=98419: Wed Jul 24 22:04:22 2024 00:24:18.902 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(279MiB/10007msec) 00:24:18.902 slat (usec): min=6, max=125, avg=24.56, stdev=13.36 00:24:18.902 clat (usec): min=11793, max=20142, avg=13411.70, stdev=726.11 00:24:18.902 lat (usec): min=11812, max=20164, avg=13436.26, stdev=726.32 00:24:18.902 clat percentiles (usec): 00:24:18.902 | 1.00th=[12125], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:24:18.902 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:24:18.902 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14484], 00:24:18.902 | 99.00th=[15139], 99.50th=[15795], 99.90th=[20055], 99.95th=[20055], 00:24:18.902 | 99.99th=[20055] 00:24:18.902 bw ( KiB/s): min=27648, max=29952, per=33.28%, avg=28456.37, stdev=541.74, samples=19 00:24:18.902 iops : min= 216, max= 234, avg=222.26, stdev= 4.16, samples=19 00:24:18.902 lat (msec) : 20=99.87%, 50=0.13% 00:24:18.902 cpu : usr=93.16%, sys=6.15%, ctx=8, majf=0, minf=0 00:24:18.902 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:18.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.902 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.902 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:18.902 filename0: (groupid=0, jobs=1): err= 0: pid=98420: Wed Jul 24 22:04:22 2024 00:24:18.902 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(279MiB/10008msec) 00:24:18.902 slat (usec): min=6, max=108, avg=26.80, stdev=15.24 00:24:18.902 clat (usec): min=11710, max=17359, avg=13411.09, stdev=718.02 00:24:18.902 lat (usec): min=11722, max=17379, avg=13437.88, stdev=719.04 00:24:18.902 clat percentiles (usec): 00:24:18.902 | 1.00th=[12125], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:24:18.902 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 
60.00th=[13566], 00:24:18.902 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14484], 00:24:18.902 | 99.00th=[15008], 99.50th=[16581], 99.90th=[17433], 99.95th=[17433], 00:24:18.902 | 99.99th=[17433] 00:24:18.902 bw ( KiB/s): min=27648, max=29952, per=33.32%, avg=28498.40, stdev=604.73, samples=20 00:24:18.902 iops : min= 216, max= 234, avg=222.60, stdev= 4.73, samples=20 00:24:18.902 lat (msec) : 20=100.00% 00:24:18.902 cpu : usr=95.20%, sys=4.29%, ctx=16, majf=0, minf=0 00:24:18.902 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:18.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.902 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.902 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:18.902 00:24:18.902 Run status group 0 (all jobs): 00:24:18.902 READ: bw=83.5MiB/s (87.6MB/s), 27.8MiB/s-27.8MiB/s (29.2MB/s-29.2MB/s), io=836MiB (876MB), run=10007-10009msec 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.902 00:24:18.902 real 0m10.975s 00:24:18.902 user 0m28.959s 00:24:18.902 sys 0m1.769s 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:18.902 ************************************ 00:24:18.902 END TEST fio_dif_digest 00:24:18.902 ************************************ 00:24:18.902 22:04:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:18.902 22:04:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:18.902 22:04:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:18.903 rmmod nvme_tcp 00:24:18.903 rmmod nvme_fabrics 00:24:18.903 rmmod nvme_keyring 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.903 22:04:23 nvmf_dif -- 
nvmf/common.sh@124 -- # set -e 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97677 ']' 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97677 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 97677 ']' 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 97677 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97677 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:18.903 killing process with pid 97677 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97677' 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@965 -- # kill 97677 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@970 -- # wait 97677 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:18.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:18.903 Waiting for block devices as requested 00:24:18.903 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:18.903 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.903 22:04:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:18.903 00:24:18.903 real 0m59.370s 00:24:18.903 user 3m49.062s 00:24:18.903 sys 0m17.555s 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:18.903 ************************************ 00:24:18.903 END TEST nvmf_dif 00:24:18.903 ************************************ 00:24:18.903 22:04:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:18.903 22:04:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:18.903 22:04:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:18.903 22:04:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:18.903 22:04:23 -- common/autotest_common.sh@10 -- # set +x 00:24:18.903 ************************************ 00:24:18.903 START TEST nvmf_abort_qd_sizes 00:24:18.903 ************************************ 00:24:18.903 22:04:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:18.903 * Looking for test storage... 
00:24:18.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:18.903 22:04:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:18.903 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:18.904 Cannot find device "nvmf_tgt_br" 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:18.904 Cannot find device "nvmf_tgt_br2" 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:18.904 Cannot find device "nvmf_tgt_br" 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:18.904 Cannot find device "nvmf_tgt_br2" 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:18.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:18.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:18.904 22:04:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:18.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:24:18.904 00:24:18.904 --- 10.0.0.2 ping statistics --- 00:24:18.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.904 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:18.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:18.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:18.904 00:24:18.904 --- 10.0.0.3 ping statistics --- 00:24:18.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.904 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:18.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:18.904 00:24:18.904 --- 10.0.0.1 ping statistics --- 00:24:18.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.904 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:18.904 22:04:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:19.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:19.471 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:19.471 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99012 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99012 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 99012 ']' 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:19.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:19.730 22:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:19.730 [2024-07-24 22:04:25.320966] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:24:19.730 [2024-07-24 22:04:25.321570] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.993 [2024-07-24 22:04:25.461998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.993 [2024-07-24 22:04:25.533463] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.993 [2024-07-24 22:04:25.533540] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.993 [2024-07-24 22:04:25.533555] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.993 [2024-07-24 22:04:25.533565] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.993 [2024-07-24 22:04:25.533584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.993 [2024-07-24 22:04:25.533770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.993 [2024-07-24 22:04:25.534510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.993 [2024-07-24 22:04:25.534633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.993 [2024-07-24 22:04:25.534642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.993 [2024-07-24 22:04:25.592113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- 
scripts/common.sh@233 -- # printf %02x 1 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:20.943 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:20.944 22:04:26 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:20.944 22:04:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:20.944 ************************************ 00:24:20.944 START TEST spdk_target_abort 00:24:20.944 ************************************ 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:20.944 spdk_targetn1 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:20.944 [2024-07-24 22:04:26.493927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:20.944 [2024-07-24 22:04:26.522078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:20.944 22:04:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:24.229 Initializing NVMe Controllers 00:24:24.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:24.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:24.229 Initialization complete. Launching workers. 
00:24:24.229 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9945, failed: 0 00:24:24.229 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1072, failed to submit 8873 00:24:24.229 success 838, unsuccess 234, failed 0 00:24:24.229 22:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:24.229 22:04:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:27.520 Initializing NVMe Controllers 00:24:27.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:27.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:27.520 Initialization complete. Launching workers. 00:24:27.520 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8904, failed: 0 00:24:27.520 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1168, failed to submit 7736 00:24:27.520 success 350, unsuccess 818, failed 0 00:24:27.520 22:04:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:27.520 22:04:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.820 Initializing NVMe Controllers 00:24:30.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:30.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:30.820 Initialization complete. Launching workers. 
00:24:30.820 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31855, failed: 0 00:24:30.820 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2422, failed to submit 29433 00:24:30.820 success 489, unsuccess 1933, failed 0 00:24:30.820 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:30.820 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.820 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.820 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.820 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:30.820 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.820 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99012 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 99012 ']' 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 99012 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99012 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:31.079 killing process with pid 99012 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99012' 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 99012 00:24:31.079 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 99012 00:24:31.338 00:24:31.338 real 0m10.456s 00:24:31.338 user 0m42.710s 00:24:31.338 sys 0m2.070s 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.338 ************************************ 00:24:31.338 END TEST spdk_target_abort 00:24:31.338 ************************************ 00:24:31.338 22:04:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:31.338 22:04:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:31.338 22:04:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:31.338 22:04:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:31.338 ************************************ 00:24:31.338 START TEST kernel_target_abort 00:24:31.338 
************************************ 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:31.338 22:04:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:31.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:31.596 Waiting for block devices as requested 00:24:31.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:31.854 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:31.854 No valid GPT data, bailing 00:24:31.854 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:32.113 No valid GPT data, bailing 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
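What the loop above is doing: setup.sh reset has just handed the NVMe devices back to the kernel driver, and the test now walks /sys/block/nvme* looking for a namespace it can safely export through the kernel nvmet target, skipping zoned devices and anything that already carries a partition-table signature. A rough standalone equivalent (the real logic lives in nvmf/common.sh and scripts/common.sh; the empty-PTTYPE test below only approximates their block_in_use check):

nvme=""
for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=/dev/${block##*/}
    # skip zoned namespaces outright
    if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
        continue
    fi
    # no partition-table signature (the "No valid GPT data, bailing" case above)
    # means the device is treated as free for the kernel target
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        nvme=$dev
    fi
done
echo "kernel target will export: ${nvme:-<none found>}"

The candidate is overwritten on every qualifying device, so, as the rest of the scan below shows, the last one found (/dev/nvme1n1 in this run) is the namespace that ends up exported.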
00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:32.113 No valid GPT data, bailing 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:32.113 No valid GPT data, bailing 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:32.113 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 --hostid=bee0c731-72a8-497b-84f7-4425e7deee11 -a 10.0.0.1 -t tcp -s 4420 00:24:32.372 00:24:32.372 Discovery Log Number of Records 2, Generation counter 2 00:24:32.372 =====Discovery Log Entry 0====== 00:24:32.372 trtype: tcp 00:24:32.372 adrfam: ipv4 00:24:32.372 subtype: current discovery subsystem 00:24:32.372 treq: not specified, sq flow control disable supported 00:24:32.372 portid: 1 00:24:32.372 trsvcid: 4420 00:24:32.372 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:32.372 traddr: 10.0.0.1 00:24:32.372 eflags: none 00:24:32.372 sectype: none 00:24:32.372 =====Discovery Log Entry 1====== 00:24:32.372 trtype: tcp 00:24:32.372 adrfam: ipv4 00:24:32.372 subtype: nvme subsystem 00:24:32.372 treq: not specified, sq flow control disable supported 00:24:32.372 portid: 1 00:24:32.372 trsvcid: 4420 00:24:32.372 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:32.372 traddr: 10.0.0.1 00:24:32.372 eflags: none 00:24:32.372 sectype: none 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:32.372 22:04:37 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:32.372 22:04:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:35.655 Initializing NVMe Controllers 00:24:35.655 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:35.655 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:35.655 Initialization complete. Launching workers. 00:24:35.655 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36101, failed: 0 00:24:35.655 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36101, failed to submit 0 00:24:35.655 success 0, unsuccess 36101, failed 0 00:24:35.655 22:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:35.655 22:04:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:38.943 Initializing NVMe Controllers 00:24:38.943 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:38.943 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:38.943 Initialization complete. Launching workers. 
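The configure_kernel_target calls traced a little earlier (the mkdir/echo/ln -s sequence before the discovery output) are the stock Linux nvmet configfs flow. Because xtrace does not show redirection targets, the attribute file names below come from the standard nvmet interface rather than from this log, and the serial-number write is omitted; treat this as a sketch, not a copy of nvmf/common.sh:

NQN=nqn.2016-06.io.spdk:testnqn
SUB=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1
DEV=/dev/nvme1n1                 # the free namespace picked by the scan above

modprobe nvmet
modprobe nvmet_tcp               # needed for the tcp port; the cleanup below removes both
mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
echo 1          > "$SUB/attr_allow_any_host"
echo "$DEV"     > "$SUB/namespaces/1/device_path"
echo 1          > "$SUB/namespaces/1/enable"
echo 10.0.0.1   > "$PORT/addr_traddr"
echo tcp        > "$PORT/addr_trtype"
echo 4420       > "$PORT/addr_trsvcid"
echo ipv4       > "$PORT/addr_adrfam"
ln -s "$SUB" "$PORT/subsystems/"
# sanity check, as in the log (which also passes --hostnqn/--hostid explicitly):
nvme discover -t tcp -a 10.0.0.1 -s 4420

Once the port-to-subsystem symlink exists, the discovery log shows the two entries seen above (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), and the same qd 4/24/64 abort sweep is repeated against 10.0.0.1:4420.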
00:24:38.943 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70406, failed: 0 00:24:38.943 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30125, failed to submit 40281 00:24:38.943 success 0, unsuccess 30125, failed 0 00:24:38.943 22:04:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:38.943 22:04:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:42.232 Initializing NVMe Controllers 00:24:42.232 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:42.232 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:42.232 Initialization complete. Launching workers. 00:24:42.232 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79419, failed: 0 00:24:42.232 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19790, failed to submit 59629 00:24:42.232 success 0, unsuccess 19790, failed 0 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:42.232 22:04:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:42.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:43.426 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:43.426 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:43.426 00:24:43.426 real 0m12.076s 00:24:43.426 user 0m6.088s 00:24:43.426 sys 0m3.315s 00:24:43.426 22:04:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:43.426 22:04:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:43.426 ************************************ 00:24:43.426 END TEST kernel_target_abort 00:24:43.426 ************************************ 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:43.426 
22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.426 rmmod nvme_tcp 00:24:43.426 rmmod nvme_fabrics 00:24:43.426 rmmod nvme_keyring 00:24:43.426 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99012 ']' 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99012 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 99012 ']' 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 99012 00:24:43.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (99012) - No such process 00:24:43.427 Process with pid 99012 is not found 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 99012 is not found' 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:43.427 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:43.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:43.994 Waiting for block devices as requested 00:24:43.994 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:43.994 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.994 22:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:44.253 00:24:44.253 real 0m25.760s 00:24:44.253 user 0m49.974s 00:24:44.253 sys 0m6.699s 00:24:44.253 22:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:44.253 22:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:44.253 ************************************ 00:24:44.253 END TEST nvmf_abort_qd_sizes 00:24:44.253 ************************************ 00:24:44.253 22:04:49 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:44.253 22:04:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:44.253 22:04:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:44.253 22:04:49 -- common/autotest_common.sh@10 -- # set +x 
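The clean_kernel_target block above undoes that configfs setup in reverse order, and nvmftestfini then unloads the host-side modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines) before setup.sh reset rebinds the devices for the next suite. Spelled out, with the same path names as in the setup sketch earlier (the bare "echo 0" in the trace is the namespace disable):

NQN=nqn.2016-06.io.spdk:testnqn
SUB=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1

echo 0 > "$SUB/namespaces/1/enable"   # take the namespace offline first
rm -f  "$PORT/subsystems/$NQN"        # unlink the port from the subsystem
rmdir  "$SUB/namespaces/1"
rmdir  "$PORT"
rmdir  "$SUB"
modprobe -r nvmet_tcp nvmet           # unload the target modules, as in the trace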
00:24:44.253 ************************************ 00:24:44.253 START TEST keyring_file 00:24:44.253 ************************************ 00:24:44.253 22:04:49 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:44.253 * Looking for test storage... 00:24:44.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.254 22:04:49 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.254 22:04:49 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.254 22:04:49 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.254 22:04:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.254 22:04:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.254 22:04:49 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.254 22:04:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:44.254 22:04:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yJVQGz8LcB 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:44.254 22:04:49 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yJVQGz8LcB 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yJVQGz8LcB 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.yJVQGz8LcB 00:24:44.254 22:04:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KiHJIKbT0x 00:24:44.254 22:04:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:44.254 22:04:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:44.523 22:04:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KiHJIKbT0x 00:24:44.523 22:04:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KiHJIKbT0x 00:24:44.523 22:04:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.KiHJIKbT0x 00:24:44.523 22:04:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=99870 00:24:44.523 22:04:49 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.523 22:04:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99870 00:24:44.523 22:04:49 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 99870 ']' 00:24:44.523 22:04:49 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.523 22:04:49 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:44.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.523 22:04:49 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.523 22:04:49 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:44.523 22:04:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 [2024-07-24 22:04:50.034281] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
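prep_key, traced just above for key0 and key1, is what turns the raw hex test keys into files the keyring module can load: a mktemp path, the interchange-format PSK produced by the nvmf/common.sh helper, and 0600 permissions. A sketch of the shell side only, assuming test/nvmf/common.sh and test/keyring/common.sh have been sourced so format_interchange_psk is defined (the python one-liner that builds the NVMeTLSkey-1 string is not reproduced here):

key_hex=00112233445566778899aabbccddeeff          # key0 here; key1 uses 112233445566778899aabbccddeeff00
key_path=$(mktemp)                                 # e.g. /tmp/tmp.yJVQGz8LcB in this run
format_interchange_psk "$key_hex" 0 > "$key_path"  # digest 0, helper from nvmf/common.sh
chmod 0600 "$key_path"                             # anything looser is rejected later in this log (the 0660 case)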
00:24:44.523 [2024-07-24 22:04:50.034381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99870 ] 00:24:44.523 [2024-07-24 22:04:50.169020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.783 [2024-07-24 22:04:50.258819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.783 [2024-07-24 22:04:50.314604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:45.348 22:04:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:45.348 [2024-07-24 22:04:51.013925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.348 null0 00:24:45.348 [2024-07-24 22:04:51.045919] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:45.348 [2024-07-24 22:04:51.046203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:45.348 [2024-07-24 22:04:51.053923] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.348 22:04:51 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.348 22:04:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:45.607 [2024-07-24 22:04:51.065916] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:45.607 request: 00:24:45.607 { 00:24:45.607 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.607 "secure_channel": false, 00:24:45.607 "listen_address": { 00:24:45.607 "trtype": "tcp", 00:24:45.607 "traddr": "127.0.0.1", 00:24:45.607 "trsvcid": "4420" 00:24:45.607 }, 00:24:45.607 "method": "nvmf_subsystem_add_listener", 00:24:45.607 "req_id": 1 00:24:45.607 } 00:24:45.607 Got JSON-RPC error response 00:24:45.607 response: 00:24:45.607 { 00:24:45.607 "code": -32602, 00:24:45.607 "message": "Invalid parameters" 00:24:45.607 } 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:45.607 22:04:51 
keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:45.607 22:04:51 keyring_file -- keyring/file.sh@46 -- # bperfpid=99887 00:24:45.607 22:04:51 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99887 /var/tmp/bperf.sock 00:24:45.607 22:04:51 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 99887 ']' 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:45.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:45.607 22:04:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:45.607 [2024-07-24 22:04:51.131044] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:45.607 [2024-07-24 22:04:51.131174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99887 ] 00:24:45.607 [2024-07-24 22:04:51.268183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.865 [2024-07-24 22:04:51.354563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.865 [2024-07-24 22:04:51.412770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:46.432 22:04:52 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:46.432 22:04:52 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:46.432 22:04:52 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:46.432 22:04:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:46.691 22:04:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.KiHJIKbT0x 00:24:46.691 22:04:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.KiHJIKbT0x 00:24:46.949 22:04:52 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:46.949 22:04:52 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:46.949 22:04:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.949 22:04:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.949 22:04:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:47.208 22:04:52 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.yJVQGz8LcB == \/\t\m\p\/\t\m\p\.\y\J\V\Q\G\z\8\L\c\B ]] 00:24:47.208 22:04:52 
keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:47.208 22:04:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:47.208 22:04:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.208 22:04:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.208 22:04:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:47.467 22:04:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.KiHJIKbT0x == \/\t\m\p\/\t\m\p\.\K\i\H\J\I\K\b\T\0\x ]] 00:24:47.467 22:04:53 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:47.467 22:04:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:47.467 22:04:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.467 22:04:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.467 22:04:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.467 22:04:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:47.727 22:04:53 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:47.727 22:04:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:47.727 22:04:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:47.727 22:04:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.727 22:04:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.727 22:04:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:47.727 22:04:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.986 22:04:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:47.986 22:04:53 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:47.986 22:04:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:48.244 [2024-07-24 22:04:53.749068] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.244 nvme0n1 00:24:48.244 22:04:53 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:48.244 22:04:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:48.244 22:04:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:48.244 22:04:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:48.244 22:04:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:48.244 22:04:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:48.503 22:04:54 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:48.503 22:04:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:48.503 22:04:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:48.503 22:04:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:48.503 22:04:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:48.503 22:04:54 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:48.503 22:04:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:48.762 22:04:54 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:48.762 22:04:54 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.762 Running I/O for 1 seconds... 00:24:50.136 00:24:50.136 Latency(us) 00:24:50.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.136 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:50.136 nvme0n1 : 1.01 11840.31 46.25 0.00 0.00 10769.49 5838.66 22878.02 00:24:50.136 =================================================================================================================== 00:24:50.136 Total : 11840.31 46.25 0.00 0.00 10769.49 5838.66 22878.02 00:24:50.136 0 00:24:50.136 22:04:55 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:50.136 22:04:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:50.136 22:04:55 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:50.136 22:04:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:50.136 22:04:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.136 22:04:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.136 22:04:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:50.136 22:04:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.394 22:04:56 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:50.394 22:04:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:50.394 22:04:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:50.394 22:04:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.394 22:04:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:50.394 22:04:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.394 22:04:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.652 22:04:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:50.652 22:04:56 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.652 22:04:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:50.652 22:04:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.652 22:04:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:50.652 22:04:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.652 22:04:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:50.652 22:04:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.652 22:04:56 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.652 22:04:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.910 [2024-07-24 22:04:56.520509] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:50.910 [2024-07-24 22:04:56.520955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85530 (107): Transport endpoint is not connected 00:24:50.910 [2024-07-24 22:04:56.521944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85530 (9): Bad file descriptor 00:24:50.910 [2024-07-24 22:04:56.522949] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.910 [2024-07-24 22:04:56.522986] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:50.910 [2024-07-24 22:04:56.523012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.910 request: 00:24:50.910 { 00:24:50.910 "name": "nvme0", 00:24:50.910 "trtype": "tcp", 00:24:50.910 "traddr": "127.0.0.1", 00:24:50.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:50.910 "adrfam": "ipv4", 00:24:50.910 "trsvcid": "4420", 00:24:50.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.910 "psk": "key1", 00:24:50.910 "method": "bdev_nvme_attach_controller", 00:24:50.910 "req_id": 1 00:24:50.910 } 00:24:50.910 Got JSON-RPC error response 00:24:50.910 response: 00:24:50.910 { 00:24:50.910 "code": -5, 00:24:50.910 "message": "Input/output error" 00:24:50.910 } 00:24:50.910 22:04:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:50.910 22:04:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:50.910 22:04:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:50.910 22:04:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:50.910 22:04:56 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:50.910 22:04:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:50.910 22:04:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.910 22:04:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.910 22:04:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:50.910 22:04:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:51.168 22:04:56 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:51.168 22:04:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:51.168 22:04:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:51.168 22:04:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:51.168 22:04:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:51.168 22:04:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:51.168 22:04:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
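Condensing the keyring exercise above into the raw RPC calls (each command appears verbatim in the trace; the key paths are the mktemp files prepared earlier): the two key files are registered with the bdevperf app, a controller is attached with the PSK the target expects, I/O runs for one second, and a re-attach with the other key is expected to fail, which is the Input/output error shown above.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF=/var/tmp/bperf.sock

"$RPC" -s "$BPERF" keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB
"$RPC" -s "$BPERF" keyring_file_add_key key1 /tmp/tmp.KiHJIKbT0x

# attach with the key the target was configured with -> creates bdev nvme0n1
"$RPC" -s "$BPERF" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# one-second randrw run over the attached controller
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF" perform_tests

# detach, then try the wrong key: the test expects this attach to fail
"$RPC" -s "$BPERF" bdev_nvme_detach_controller nvme0
"$RPC" -s "$BPERF" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 \
    || echo 'attach with key1 rejected, as expected'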
00:24:51.426 22:04:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:51.426 22:04:57 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:51.426 22:04:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:51.684 22:04:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:51.684 22:04:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:51.943 22:04:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:51.943 22:04:57 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:51.943 22:04:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:52.201 22:04:57 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:52.201 22:04:57 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.yJVQGz8LcB 00:24:52.201 22:04:57 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:52.201 22:04:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:52.201 22:04:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:52.201 22:04:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:52.201 22:04:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.201 22:04:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:52.201 22:04:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.201 22:04:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:52.201 22:04:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:52.459 [2024-07-24 22:04:58.008373] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yJVQGz8LcB': 0100660 00:24:52.459 [2024-07-24 22:04:58.008418] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:52.459 request: 00:24:52.459 { 00:24:52.459 "name": "key0", 00:24:52.459 "path": "/tmp/tmp.yJVQGz8LcB", 00:24:52.459 "method": "keyring_file_add_key", 00:24:52.459 "req_id": 1 00:24:52.459 } 00:24:52.459 Got JSON-RPC error response 00:24:52.459 response: 00:24:52.459 { 00:24:52.459 "code": -1, 00:24:52.459 "message": "Operation not permitted" 00:24:52.459 } 00:24:52.459 22:04:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:52.459 22:04:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:52.459 22:04:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:52.459 22:04:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:52.459 22:04:58 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.yJVQGz8LcB 00:24:52.459 22:04:58 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:52.459 22:04:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yJVQGz8LcB 00:24:52.717 22:04:58 keyring_file -- keyring/file.sh@86 -- # rm -f 
/tmp/tmp.yJVQGz8LcB 00:24:52.718 22:04:58 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:52.718 22:04:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:52.718 22:04:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:52.718 22:04:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:52.718 22:04:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:52.718 22:04:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:52.976 22:04:58 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:52.976 22:04:58 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:52.976 22:04:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:52.976 22:04:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:52.976 22:04:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:52.976 22:04:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.976 22:04:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:52.976 22:04:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.976 22:04:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:52.976 22:04:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:53.234 [2024-07-24 22:04:58.712543] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.yJVQGz8LcB': No such file or directory 00:24:53.234 [2024-07-24 22:04:58.712595] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:53.234 [2024-07-24 22:04:58.712664] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:53.234 [2024-07-24 22:04:58.712675] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:53.234 [2024-07-24 22:04:58.712684] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:53.234 request: 00:24:53.234 { 00:24:53.235 "name": "nvme0", 00:24:53.235 "trtype": "tcp", 00:24:53.235 "traddr": "127.0.0.1", 00:24:53.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:53.235 "adrfam": "ipv4", 00:24:53.235 "trsvcid": "4420", 00:24:53.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.235 "psk": "key0", 00:24:53.235 "method": "bdev_nvme_attach_controller", 00:24:53.235 "req_id": 1 00:24:53.235 } 00:24:53.235 Got JSON-RPC error response 00:24:53.235 response: 00:24:53.235 { 00:24:53.235 "code": -19, 00:24:53.235 "message": "No such device" 00:24:53.235 } 00:24:53.235 22:04:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:53.235 22:04:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 
)) 00:24:53.235 22:04:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:53.235 22:04:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:53.235 22:04:58 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:53.235 22:04:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:53.493 22:04:58 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IWzD6FqYFA 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:53.493 22:04:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:53.493 22:04:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:53.493 22:04:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:53.493 22:04:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:53.493 22:04:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:53.493 22:04:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IWzD6FqYFA 00:24:53.493 22:04:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IWzD6FqYFA 00:24:53.493 22:04:59 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IWzD6FqYFA 00:24:53.493 22:04:59 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWzD6FqYFA 00:24:53.493 22:04:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IWzD6FqYFA 00:24:53.752 22:04:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:53.752 22:04:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:54.010 nvme0n1 00:24:54.010 22:04:59 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:54.010 22:04:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:54.010 22:04:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:54.010 22:04:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.010 22:04:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.010 22:04:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:54.269 22:04:59 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:54.269 22:04:59 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:54.269 22:04:59 
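The /tmp/tmp.IWzD6FqYFA key registered above comes from prep_key: format_interchange_psk wraps the raw hex string 00112233445566778899aabbccddeeff into the NVMe TLS PSK interchange format (the NVMeTLSkey-1:00:...: value) and the result lands in a fresh mktemp path with mode 0600. A minimal standalone sketch of that formatting step follows; the encoding details (base64 of the key string followed by its little-endian CRC32, a two-hex-digit hash field with 00 meaning no hash, no trailing newline) are inferred from the helpers traced here, so treat it as an illustration rather than the canonical SPDK implementation.

key=00112233445566778899aabbccddeeff
path=$(mktemp)
# emit "NVMeTLSkey-1:00:<base64(key || crc32(key))>:" without a trailing newline
python - "$key" <<'PYEOF' > "$path"
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":", end="")
PYEOF
chmod 0600 "$path"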
keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:54.527 22:05:00 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:54.527 22:05:00 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:54.527 22:05:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.527 22:05:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:54.527 22:05:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.786 22:05:00 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:54.786 22:05:00 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:54.786 22:05:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:54.786 22:05:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:54.786 22:05:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.786 22:05:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.786 22:05:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:55.045 22:05:00 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:55.045 22:05:00 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:55.045 22:05:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:55.304 22:05:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:55.304 22:05:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.304 22:05:00 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:55.562 22:05:01 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:55.562 22:05:01 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IWzD6FqYFA 00:24:55.562 22:05:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IWzD6FqYFA 00:24:55.821 22:05:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.KiHJIKbT0x 00:24:55.821 22:05:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.KiHJIKbT0x 00:24:56.079 22:05:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:56.079 22:05:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:56.338 nvme0n1 00:24:56.338 22:05:01 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:56.338 22:05:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:56.597 22:05:02 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:56.597 "subsystems": [ 00:24:56.597 { 00:24:56.597 "subsystem": "keyring", 00:24:56.597 "config": [ 00:24:56.597 { 00:24:56.597 
"method": "keyring_file_add_key", 00:24:56.597 "params": { 00:24:56.597 "name": "key0", 00:24:56.597 "path": "/tmp/tmp.IWzD6FqYFA" 00:24:56.597 } 00:24:56.597 }, 00:24:56.597 { 00:24:56.597 "method": "keyring_file_add_key", 00:24:56.597 "params": { 00:24:56.597 "name": "key1", 00:24:56.597 "path": "/tmp/tmp.KiHJIKbT0x" 00:24:56.597 } 00:24:56.597 } 00:24:56.597 ] 00:24:56.597 }, 00:24:56.597 { 00:24:56.597 "subsystem": "iobuf", 00:24:56.597 "config": [ 00:24:56.597 { 00:24:56.597 "method": "iobuf_set_options", 00:24:56.597 "params": { 00:24:56.597 "small_pool_count": 8192, 00:24:56.597 "large_pool_count": 1024, 00:24:56.597 "small_bufsize": 8192, 00:24:56.597 "large_bufsize": 135168 00:24:56.597 } 00:24:56.597 } 00:24:56.597 ] 00:24:56.597 }, 00:24:56.597 { 00:24:56.597 "subsystem": "sock", 00:24:56.597 "config": [ 00:24:56.597 { 00:24:56.597 "method": "sock_set_default_impl", 00:24:56.597 "params": { 00:24:56.597 "impl_name": "uring" 00:24:56.597 } 00:24:56.597 }, 00:24:56.597 { 00:24:56.597 "method": "sock_impl_set_options", 00:24:56.597 "params": { 00:24:56.597 "impl_name": "ssl", 00:24:56.597 "recv_buf_size": 4096, 00:24:56.597 "send_buf_size": 4096, 00:24:56.597 "enable_recv_pipe": true, 00:24:56.597 "enable_quickack": false, 00:24:56.597 "enable_placement_id": 0, 00:24:56.597 "enable_zerocopy_send_server": true, 00:24:56.597 "enable_zerocopy_send_client": false, 00:24:56.597 "zerocopy_threshold": 0, 00:24:56.597 "tls_version": 0, 00:24:56.598 "enable_ktls": false 00:24:56.598 } 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "method": "sock_impl_set_options", 00:24:56.598 "params": { 00:24:56.598 "impl_name": "posix", 00:24:56.598 "recv_buf_size": 2097152, 00:24:56.598 "send_buf_size": 2097152, 00:24:56.598 "enable_recv_pipe": true, 00:24:56.598 "enable_quickack": false, 00:24:56.598 "enable_placement_id": 0, 00:24:56.598 "enable_zerocopy_send_server": true, 00:24:56.598 "enable_zerocopy_send_client": false, 00:24:56.598 "zerocopy_threshold": 0, 00:24:56.598 "tls_version": 0, 00:24:56.598 "enable_ktls": false 00:24:56.598 } 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "method": "sock_impl_set_options", 00:24:56.598 "params": { 00:24:56.598 "impl_name": "uring", 00:24:56.598 "recv_buf_size": 2097152, 00:24:56.598 "send_buf_size": 2097152, 00:24:56.598 "enable_recv_pipe": true, 00:24:56.598 "enable_quickack": false, 00:24:56.598 "enable_placement_id": 0, 00:24:56.598 "enable_zerocopy_send_server": false, 00:24:56.598 "enable_zerocopy_send_client": false, 00:24:56.598 "zerocopy_threshold": 0, 00:24:56.598 "tls_version": 0, 00:24:56.598 "enable_ktls": false 00:24:56.598 } 00:24:56.598 } 00:24:56.598 ] 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "subsystem": "vmd", 00:24:56.598 "config": [] 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "subsystem": "accel", 00:24:56.598 "config": [ 00:24:56.598 { 00:24:56.598 "method": "accel_set_options", 00:24:56.598 "params": { 00:24:56.598 "small_cache_size": 128, 00:24:56.598 "large_cache_size": 16, 00:24:56.598 "task_count": 2048, 00:24:56.598 "sequence_count": 2048, 00:24:56.598 "buf_count": 2048 00:24:56.598 } 00:24:56.598 } 00:24:56.598 ] 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "subsystem": "bdev", 00:24:56.598 "config": [ 00:24:56.598 { 00:24:56.598 "method": "bdev_set_options", 00:24:56.598 "params": { 00:24:56.598 "bdev_io_pool_size": 65535, 00:24:56.598 "bdev_io_cache_size": 256, 00:24:56.598 "bdev_auto_examine": true, 00:24:56.598 "iobuf_small_cache_size": 128, 00:24:56.598 "iobuf_large_cache_size": 16 00:24:56.598 } 00:24:56.598 }, 
00:24:56.598 { 00:24:56.598 "method": "bdev_raid_set_options", 00:24:56.598 "params": { 00:24:56.598 "process_window_size_kb": 1024 00:24:56.598 } 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "method": "bdev_iscsi_set_options", 00:24:56.598 "params": { 00:24:56.598 "timeout_sec": 30 00:24:56.598 } 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "method": "bdev_nvme_set_options", 00:24:56.598 "params": { 00:24:56.598 "action_on_timeout": "none", 00:24:56.598 "timeout_us": 0, 00:24:56.598 "timeout_admin_us": 0, 00:24:56.598 "keep_alive_timeout_ms": 10000, 00:24:56.598 "arbitration_burst": 0, 00:24:56.598 "low_priority_weight": 0, 00:24:56.598 "medium_priority_weight": 0, 00:24:56.598 "high_priority_weight": 0, 00:24:56.598 "nvme_adminq_poll_period_us": 10000, 00:24:56.598 "nvme_ioq_poll_period_us": 0, 00:24:56.598 "io_queue_requests": 512, 00:24:56.598 "delay_cmd_submit": true, 00:24:56.598 "transport_retry_count": 4, 00:24:56.598 "bdev_retry_count": 3, 00:24:56.598 "transport_ack_timeout": 0, 00:24:56.598 "ctrlr_loss_timeout_sec": 0, 00:24:56.598 "reconnect_delay_sec": 0, 00:24:56.598 "fast_io_fail_timeout_sec": 0, 00:24:56.598 "disable_auto_failback": false, 00:24:56.598 "generate_uuids": false, 00:24:56.598 "transport_tos": 0, 00:24:56.598 "nvme_error_stat": false, 00:24:56.598 "rdma_srq_size": 0, 00:24:56.598 "io_path_stat": false, 00:24:56.598 "allow_accel_sequence": false, 00:24:56.598 "rdma_max_cq_size": 0, 00:24:56.598 "rdma_cm_event_timeout_ms": 0, 00:24:56.598 "dhchap_digests": [ 00:24:56.598 "sha256", 00:24:56.598 "sha384", 00:24:56.598 "sha512" 00:24:56.598 ], 00:24:56.598 "dhchap_dhgroups": [ 00:24:56.598 "null", 00:24:56.598 "ffdhe2048", 00:24:56.598 "ffdhe3072", 00:24:56.598 "ffdhe4096", 00:24:56.598 "ffdhe6144", 00:24:56.598 "ffdhe8192" 00:24:56.598 ] 00:24:56.598 } 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "method": "bdev_nvme_attach_controller", 00:24:56.598 "params": { 00:24:56.598 "name": "nvme0", 00:24:56.598 "trtype": "TCP", 00:24:56.598 "adrfam": "IPv4", 00:24:56.598 "traddr": "127.0.0.1", 00:24:56.598 "trsvcid": "4420", 00:24:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:56.598 "prchk_reftag": false, 00:24:56.598 "prchk_guard": false, 00:24:56.598 "ctrlr_loss_timeout_sec": 0, 00:24:56.598 "reconnect_delay_sec": 0, 00:24:56.598 "fast_io_fail_timeout_sec": 0, 00:24:56.598 "psk": "key0", 00:24:56.598 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:56.598 "hdgst": false, 00:24:56.598 "ddgst": false 00:24:56.598 } 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "method": "bdev_nvme_set_hotplug", 00:24:56.598 "params": { 00:24:56.598 "period_us": 100000, 00:24:56.598 "enable": false 00:24:56.598 } 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "method": "bdev_wait_for_examine" 00:24:56.598 } 00:24:56.598 ] 00:24:56.598 }, 00:24:56.598 { 00:24:56.598 "subsystem": "nbd", 00:24:56.598 "config": [] 00:24:56.598 } 00:24:56.598 ] 00:24:56.598 }' 00:24:56.598 22:05:02 keyring_file -- keyring/file.sh@114 -- # killprocess 99887 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 99887 ']' 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@950 -- # kill -0 99887 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@951 -- # uname 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99887 00:24:56.598 killing process with pid 99887 00:24:56.598 Received shutdown signal, test time was about 1.000000 
seconds 00:24:56.598 00:24:56.598 Latency(us) 00:24:56.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.598 =================================================================================================================== 00:24:56.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99887' 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@965 -- # kill 99887 00:24:56.598 22:05:02 keyring_file -- common/autotest_common.sh@970 -- # wait 99887 00:24:56.858 22:05:02 keyring_file -- keyring/file.sh@117 -- # bperfpid=100131 00:24:56.858 22:05:02 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100131 /var/tmp/bperf.sock 00:24:56.858 22:05:02 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 100131 ']' 00:24:56.858 22:05:02 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:56.858 22:05:02 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:56.858 22:05:02 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:56.858 "subsystems": [ 00:24:56.858 { 00:24:56.858 "subsystem": "keyring", 00:24:56.858 "config": [ 00:24:56.858 { 00:24:56.858 "method": "keyring_file_add_key", 00:24:56.858 "params": { 00:24:56.858 "name": "key0", 00:24:56.858 "path": "/tmp/tmp.IWzD6FqYFA" 00:24:56.858 } 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "method": "keyring_file_add_key", 00:24:56.858 "params": { 00:24:56.858 "name": "key1", 00:24:56.858 "path": "/tmp/tmp.KiHJIKbT0x" 00:24:56.858 } 00:24:56.858 } 00:24:56.858 ] 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "subsystem": "iobuf", 00:24:56.858 "config": [ 00:24:56.858 { 00:24:56.858 "method": "iobuf_set_options", 00:24:56.858 "params": { 00:24:56.858 "small_pool_count": 8192, 00:24:56.858 "large_pool_count": 1024, 00:24:56.858 "small_bufsize": 8192, 00:24:56.858 "large_bufsize": 135168 00:24:56.858 } 00:24:56.858 } 00:24:56.858 ] 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "subsystem": "sock", 00:24:56.858 "config": [ 00:24:56.858 { 00:24:56.858 "method": "sock_set_default_impl", 00:24:56.858 "params": { 00:24:56.858 "impl_name": "uring" 00:24:56.858 } 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "method": "sock_impl_set_options", 00:24:56.858 "params": { 00:24:56.858 "impl_name": "ssl", 00:24:56.858 "recv_buf_size": 4096, 00:24:56.858 "send_buf_size": 4096, 00:24:56.858 "enable_recv_pipe": true, 00:24:56.858 "enable_quickack": false, 00:24:56.858 "enable_placement_id": 0, 00:24:56.858 "enable_zerocopy_send_server": true, 00:24:56.858 "enable_zerocopy_send_client": false, 00:24:56.858 "zerocopy_threshold": 0, 00:24:56.858 "tls_version": 0, 00:24:56.858 "enable_ktls": false 00:24:56.858 } 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "method": "sock_impl_set_options", 00:24:56.858 "params": { 00:24:56.858 "impl_name": "posix", 00:24:56.858 "recv_buf_size": 2097152, 00:24:56.858 "send_buf_size": 2097152, 00:24:56.858 "enable_recv_pipe": true, 00:24:56.858 "enable_quickack": false, 00:24:56.858 "enable_placement_id": 0, 00:24:56.858 "enable_zerocopy_send_server": true, 00:24:56.858 "enable_zerocopy_send_client": false, 00:24:56.858 "zerocopy_threshold": 0, 
00:24:56.858 "tls_version": 0, 00:24:56.858 "enable_ktls": false 00:24:56.858 } 00:24:56.858 }, 00:24:56.858 { 00:24:56.858 "method": "sock_impl_set_options", 00:24:56.858 "params": { 00:24:56.858 "impl_name": "uring", 00:24:56.858 "recv_buf_size": 2097152, 00:24:56.858 "send_buf_size": 2097152, 00:24:56.858 "enable_recv_pipe": true, 00:24:56.858 "enable_quickack": false, 00:24:56.858 "enable_placement_id": 0, 00:24:56.858 "enable_zerocopy_send_server": false, 00:24:56.859 "enable_zerocopy_send_client": false, 00:24:56.859 "zerocopy_threshold": 0, 00:24:56.859 "tls_version": 0, 00:24:56.859 "enable_ktls": false 00:24:56.859 } 00:24:56.859 } 00:24:56.859 ] 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "subsystem": "vmd", 00:24:56.859 "config": [] 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "subsystem": "accel", 00:24:56.859 "config": [ 00:24:56.859 { 00:24:56.859 "method": "accel_set_options", 00:24:56.859 "params": { 00:24:56.859 "small_cache_size": 128, 00:24:56.859 "large_cache_size": 16, 00:24:56.859 "task_count": 2048, 00:24:56.859 "sequence_count": 2048, 00:24:56.859 "buf_count": 2048 00:24:56.859 } 00:24:56.859 } 00:24:56.859 ] 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "subsystem": "bdev", 00:24:56.859 "config": [ 00:24:56.859 { 00:24:56.859 "method": "bdev_set_options", 00:24:56.859 "params": { 00:24:56.859 "bdev_io_pool_size": 65535, 00:24:56.859 "bdev_io_cache_size": 256, 00:24:56.859 "bdev_auto_examine": true, 00:24:56.859 "iobuf_small_cache_size": 128, 00:24:56.859 "iobuf_large_cache_size": 16 00:24:56.859 } 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "method": "bdev_raid_set_options", 00:24:56.859 "params": { 00:24:56.859 "process_window_size_kb": 1024 00:24:56.859 } 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "method": "bdev_iscsi_set_options", 00:24:56.859 "params": { 00:24:56.859 "timeout_sec": 30 00:24:56.859 } 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "method": "bdev_nvme_set_options", 00:24:56.859 "params": { 00:24:56.859 "action_on_timeout": "none", 00:24:56.859 "timeout_us": 0, 00:24:56.859 "timeout_admin_us": 0, 00:24:56.859 "keep_alive_timeout_ms": 10000, 00:24:56.859 "arbitration_burst": 0, 00:24:56.859 "low_priority_weight": 0, 00:24:56.859 "medium_priority_weight": 0, 00:24:56.859 "high_priority_weight": 0, 00:24:56.859 "nvme_adminq_poll_period_us": 10000, 00:24:56.859 "nvme_ioq_poll_period_us": 0, 00:24:56.859 "io_queue_requests": 512, 00:24:56.859 "delay_cmd_submit": true, 00:24:56.859 "transport_retry_count": 4, 00:24:56.859 "bdev_retry_count": 3, 00:24:56.859 "transport_ack_timeout": 0, 00:24:56.859 "ctrlr_loss_timeout_sec": 0, 00:24:56.859 "reconnect_delay_sec": 0, 00:24:56.859 "fast_io_fail_timeout_sec": 0, 00:24:56.859 "disable_auto_failback": false, 00:24:56.859 "generate_uuids": false, 00:24:56.859 "transport_tos": 0, 00:24:56.859 "nvme_error_stat": false, 00:24:56.859 "rdma_srq_size": 0, 00:24:56.859 "io_path_stat": false, 00:24:56.859 "allow_accel_sequence": false, 00:24:56.859 "rdma_max_cq_size": 0, 00:24:56.859 "rdma_cm_event_timeout_ms": 0, 00:24:56.859 "dhchap_digests": [ 00:24:56.859 "sha256", 00:24:56.859 "sha384", 00:24:56.859 "sha512" 00:24:56.859 ], 00:24:56.859 "dhchap_dhgroups": [ 00:24:56.859 "null", 00:24:56.859 "ffdhe2048", 00:24:56.859 "ffdhe3072", 00:24:56.859 "ffdhe4096", 00:24:56.859 "ffdhe6144", 00:24:56.859 "ffdhe8192" 00:24:56.859 ] 00:24:56.859 } 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "method": "bdev_nvme_attach_controller", 00:24:56.859 "params": { 00:24:56.859 "name": "nvme0", 00:24:56.859 "trtype": "TCP", 00:24:56.859 
"adrfam": "IPv4", 00:24:56.859 "traddr": "127.0.0.1", 00:24:56.859 "trsvcid": "4420", 00:24:56.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:56.859 "prchk_reftag": false, 00:24:56.859 "prchk_guard": false, 00:24:56.859 "ctrlr_loss_timeout_sec": 0, 00:24:56.859 "reconnect_delay_sec": 0, 00:24:56.859 "fast_io_fail_timeout_sec": 0, 00:24:56.859 "psk": "key0", 00:24:56.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:56.859 "hdgst": false, 00:24:56.859 "ddgst": false 00:24:56.859 } 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "method": "bdev_nvme_set_hotplug", 00:24:56.859 "params": { 00:24:56.859 "period_us": 100000, 00:24:56.859 "enable": false 00:24:56.859 } 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "method": "bdev_wait_for_examine" 00:24:56.859 } 00:24:56.859 ] 00:24:56.859 }, 00:24:56.859 { 00:24:56.859 "subsystem": "nbd", 00:24:56.859 "config": [] 00:24:56.859 } 00:24:56.859 ] 00:24:56.859 }' 00:24:56.859 22:05:02 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:56.859 22:05:02 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:56.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:56.859 22:05:02 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:56.859 22:05:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:56.859 [2024-07-24 22:05:02.466104] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:56.859 [2024-07-24 22:05:02.466200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100131 ] 00:24:57.118 [2024-07-24 22:05:02.601751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.118 [2024-07-24 22:05:02.684854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.118 [2024-07-24 22:05:02.821413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:57.377 [2024-07-24 22:05:02.874017] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.943 22:05:03 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:57.943 22:05:03 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:57.943 22:05:03 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:57.943 22:05:03 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:57.943 22:05:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.202 22:05:03 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:58.202 22:05:03 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:58.202 22:05:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:58.202 22:05:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.202 22:05:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:58.202 22:05:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.202 22:05:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.461 22:05:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:58.461 
22:05:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:58.461 22:05:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:58.461 22:05:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.461 22:05:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.461 22:05:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.461 22:05:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:58.720 22:05:04 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:58.720 22:05:04 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:58.720 22:05:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:58.720 22:05:04 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:58.980 22:05:04 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:58.980 22:05:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:58.980 22:05:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IWzD6FqYFA /tmp/tmp.KiHJIKbT0x 00:24:58.980 22:05:04 keyring_file -- keyring/file.sh@20 -- # killprocess 100131 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 100131 ']' 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@950 -- # kill -0 100131 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@951 -- # uname 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100131 00:24:58.980 killing process with pid 100131 00:24:58.980 Received shutdown signal, test time was about 1.000000 seconds 00:24:58.980 00:24:58.980 Latency(us) 00:24:58.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.980 =================================================================================================================== 00:24:58.980 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100131' 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@965 -- # kill 100131 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@970 -- # wait 100131 00:24:58.980 22:05:04 keyring_file -- keyring/file.sh@21 -- # killprocess 99870 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 99870 ']' 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@950 -- # kill -0 99870 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@951 -- # uname 00:24:58.980 22:05:04 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:59.239 22:05:04 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99870 00:24:59.239 killing process with pid 99870 00:24:59.239 22:05:04 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:59.239 22:05:04 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:59.239 22:05:04 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing 
process with pid 99870' 00:24:59.239 22:05:04 keyring_file -- common/autotest_common.sh@965 -- # kill 99870 00:24:59.239 [2024-07-24 22:05:04.719143] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:59.239 22:05:04 keyring_file -- common/autotest_common.sh@970 -- # wait 99870 00:24:59.498 00:24:59.498 real 0m15.345s 00:24:59.498 user 0m38.092s 00:24:59.498 sys 0m3.025s 00:24:59.498 22:05:05 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:59.498 ************************************ 00:24:59.498 END TEST keyring_file 00:24:59.498 ************************************ 00:24:59.498 22:05:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:59.498 22:05:05 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:59.498 22:05:05 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:59.498 22:05:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:59.498 22:05:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.498 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:24:59.498 ************************************ 00:24:59.498 START TEST keyring_linux 00:24:59.498 ************************************ 00:24:59.498 22:05:05 keyring_linux -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:59.757 * Looking for test storage... 00:24:59.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:59.757 22:05:05 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:59.757 22:05:05 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bee0c731-72a8-497b-84f7-4425e7deee11 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=bee0c731-72a8-497b-84f7-4425e7deee11 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.757 22:05:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:59.758 22:05:05 keyring_linux -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:24:59.758 22:05:05 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.758 22:05:05 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.758 22:05:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.758 22:05:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.758 22:05:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.758 22:05:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:59.758 22:05:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 
00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:59.758 /tmp/:spdk-test:key0 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:59.758 22:05:05 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:59.758 /tmp/:spdk-test:key1 00:24:59.758 22:05:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100244 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:59.758 22:05:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100244 00:24:59.758 22:05:05 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 100244 ']' 00:24:59.758 22:05:05 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.758 22:05:05 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:59.758 22:05:05 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:59.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.758 22:05:05 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:59.758 22:05:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:59.758 [2024-07-24 22:05:05.437494] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:59.758 [2024-07-24 22:05:05.437602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100244 ] 00:25:00.017 [2024-07-24 22:05:05.576806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.017 [2024-07-24 22:05:05.653782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.017 [2024-07-24 22:05:05.709260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:25:00.953 22:05:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:00.953 [2024-07-24 22:05:06.412999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.953 null0 00:25:00.953 [2024-07-24 22:05:06.444996] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:00.953 [2024-07-24 22:05:06.445245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.953 22:05:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:00.953 876340967 00:25:00.953 22:05:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:00.953 846584089 00:25:00.953 22:05:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100262 00:25:00.953 22:05:06 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:00.953 22:05:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100262 /var/tmp/bperf.sock 00:25:00.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 100262 ']' 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:00.953 22:05:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:00.953 [2024-07-24 22:05:06.526974] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
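Unlike keyring_file, this test keeps the PSKs out of the filesystem: the two interchange-format keys are loaded into the kernel session keyring with keyctl, and SPDK is later handed only the key names :spdk-test:key0 and :spdk-test:key1. Stripped of the trace noise, the keyctl sequence the rest of the run depends on is the following (the serial number printed by keyctl add, 876340967 for key0 in this run, differs from run to run):

# add a user-type key to the session keyring (@s); keyctl prints its serial number
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
# resolve the key by name and read its payload back, as the checks below do
keyctl search @s user :spdk-test:key0
keyctl print "$sn"
# drop the link from the session keyring during cleanup
keyctl unlink "$sn"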
00:25:00.953 [2024-07-24 22:05:06.527283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100262 ] 00:25:00.953 [2024-07-24 22:05:06.668347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.212 [2024-07-24 22:05:06.749220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.780 22:05:07 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:01.780 22:05:07 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:25:01.780 22:05:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:01.780 22:05:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:02.039 22:05:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:02.039 22:05:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:02.298 [2024-07-24 22:05:07.903789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:02.298 22:05:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:02.298 22:05:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:02.556 [2024-07-24 22:05:08.164282] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.556 nvme0n1 00:25:02.556 22:05:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:02.556 22:05:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:02.556 22:05:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:02.556 22:05:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:02.556 22:05:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:02.556 22:05:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:03.124 22:05:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.124 22:05:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:03.124 22:05:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@25 -- # sn=876340967 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:03.124 22:05:08 keyring_linux 
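The kernel-keyring backend is enabled before the framework finishes initialising, so this bdevperf instance is started with --wait-for-rpc and configured entirely over its RPC socket rather than through a JSON config. Condensed from the trace above into plain commands, the bring-up is:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc &

# (the script waits for the socket via waitforlisten before issuing these)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# switch on the Linux keyring backend, then let initialisation finish
$rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
$rpc -s /var/tmp/bperf.sock framework_start_init
# the PSK is referenced by keyring name, not by a file path
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0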
-- keyring/linux.sh@26 -- # [[ 876340967 == \8\7\6\3\4\0\9\6\7 ]] 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 876340967 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:03.124 22:05:08 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:03.384 Running I/O for 1 seconds... 00:25:04.319 00:25:04.319 Latency(us) 00:25:04.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.319 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:04.319 nvme0n1 : 1.01 11797.22 46.08 0.00 0.00 10787.13 7804.74 16801.05 00:25:04.319 =================================================================================================================== 00:25:04.319 Total : 11797.22 46.08 0.00 0.00 10787.13 7804.74 16801.05 00:25:04.319 0 00:25:04.319 22:05:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:04.319 22:05:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:04.578 22:05:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:04.578 22:05:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:04.578 22:05:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:04.578 22:05:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:04.578 22:05:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.578 22:05:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:04.836 22:05:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:04.836 22:05:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:04.836 22:05:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:04.836 22:05:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:04.836 22:05:10 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:25:04.836 22:05:10 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:04.836 22:05:10 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:04.836 22:05:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.836 22:05:10 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:04.836 22:05:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:04.836 22:05:10 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:04.836 22:05:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:05.096 [2024-07-24 22:05:10.699149] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:05.096 [2024-07-24 22:05:10.699544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df470 (107): Transport endpoint is not connected 00:25:05.096 [2024-07-24 22:05:10.700535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df470 (9): Bad file descriptor 00:25:05.096 [2024-07-24 22:05:10.701531] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.096 [2024-07-24 22:05:10.701553] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:05.096 [2024-07-24 22:05:10.701563] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.096 request: 00:25:05.096 { 00:25:05.096 "name": "nvme0", 00:25:05.096 "trtype": "tcp", 00:25:05.096 "traddr": "127.0.0.1", 00:25:05.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.096 "adrfam": "ipv4", 00:25:05.096 "trsvcid": "4420", 00:25:05.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.096 "psk": ":spdk-test:key1", 00:25:05.096 "method": "bdev_nvme_attach_controller", 00:25:05.096 "req_id": 1 00:25:05.096 } 00:25:05.096 Got JSON-RPC error response 00:25:05.096 response: 00:25:05.096 { 00:25:05.096 "code": -5, 00:25:05.096 "message": "Input/output error" 00:25:05.096 } 00:25:05.096 22:05:10 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:25:05.096 22:05:10 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.096 22:05:10 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.096 22:05:10 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@33 -- # sn=876340967 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 876340967 00:25:05.096 1 links removed 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@33 -- # sn=846584089 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 846584089 00:25:05.096 1 links removed 00:25:05.096 22:05:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100262 00:25:05.096 22:05:10 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 100262 ']' 00:25:05.097 22:05:10 
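The failed attach just above is the point of the test: the target was set up with key0, so an attach that presents --psk :spdk-test:key1 is expected to be rejected, and the NOT wrapper from autotest_common.sh turns that expected failure into a passing check by running the command and succeeding only when it exits with an ordinary error status. A simplified sketch of the idea follows; it is not the exact autotest_common.sh implementation (which also validates the command and, judging by the es > 128 check in the trace, treats signal exits separately), and bperf_cmd is the trace's rpc.py wrapper for the bperf socket.

NOT() {
    local es=0
    "$@" || es=$?
    # sketch's choice: a status above 128 (death by signal) is a hard failure,
    # not an "expected" error, so propagate it
    if (( es > 128 )); then
        return "$es"
    fi
    # succeed only if the wrapped command failed
    (( es != 0 ))
}

# usage mirroring the trace: attaching with the wrong PSK must fail
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1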
keyring_linux -- common/autotest_common.sh@950 -- # kill -0 100262 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100262 00:25:05.097 killing process with pid 100262 00:25:05.097 Received shutdown signal, test time was about 1.000000 seconds 00:25:05.097 00:25:05.097 Latency(us) 00:25:05.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.097 =================================================================================================================== 00:25:05.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100262' 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@965 -- # kill 100262 00:25:05.097 22:05:10 keyring_linux -- common/autotest_common.sh@970 -- # wait 100262 00:25:05.356 22:05:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100244 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 100244 ']' 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 100244 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100244 00:25:05.356 killing process with pid 100244 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100244' 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@965 -- # kill 100244 00:25:05.356 22:05:10 keyring_linux -- common/autotest_common.sh@970 -- # wait 100244 00:25:05.923 00:25:05.923 real 0m6.209s 00:25:05.923 user 0m11.948s 00:25:05.923 sys 0m1.530s 00:25:05.923 22:05:11 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:05.923 22:05:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:05.923 ************************************ 00:25:05.923 END TEST keyring_linux 00:25:05.923 ************************************ 00:25:05.923 22:05:11 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:05.923 22:05:11 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:05.923 22:05:11 
-- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:05.923 22:05:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:05.923 22:05:11 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:05.923 22:05:11 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:05.923 22:05:11 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:05.923 22:05:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:05.923 22:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:05.923 22:05:11 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:05.923 22:05:11 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:25:05.923 22:05:11 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:25:05.923 22:05:11 -- common/autotest_common.sh@10 -- # set +x 00:25:07.301 INFO: APP EXITING 00:25:07.301 INFO: killing all VMs 00:25:07.301 INFO: killing vhost app 00:25:07.301 INFO: EXIT DONE 00:25:08.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:08.237 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:08.237 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:08.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:08.804 Cleaning 00:25:08.804 Removing: /var/run/dpdk/spdk0/config 00:25:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:08.804 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:08.804 Removing: /var/run/dpdk/spdk1/config 00:25:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:08.804 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:08.804 Removing: /var/run/dpdk/spdk2/config 00:25:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:08.804 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:08.804 Removing: /var/run/dpdk/spdk3/config 00:25:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:08.804 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:08.804 Removing: /var/run/dpdk/spdk4/config 00:25:08.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:08.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:08.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:08.804 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:08.804 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:08.804 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:08.804 Removing: /dev/shm/nvmf_trace.0 00:25:08.804 
Removing: /dev/shm/spdk_tgt_trace.pid70795 00:25:08.804 Removing: /var/run/dpdk/spdk0 00:25:08.804 Removing: /var/run/dpdk/spdk1 00:25:08.804 Removing: /var/run/dpdk/spdk2 00:25:08.804 Removing: /var/run/dpdk/spdk3 00:25:08.804 Removing: /var/run/dpdk/spdk4 00:25:08.804 Removing: /var/run/dpdk/spdk_pid100131 00:25:08.804 Removing: /var/run/dpdk/spdk_pid100244 00:25:08.804 Removing: /var/run/dpdk/spdk_pid100262 00:25:08.804 Removing: /var/run/dpdk/spdk_pid70650 00:25:09.063 Removing: /var/run/dpdk/spdk_pid70795 00:25:09.063 Removing: /var/run/dpdk/spdk_pid70993 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71074 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71107 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71211 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71229 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71347 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71543 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71678 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71748 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71816 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71902 00:25:09.063 Removing: /var/run/dpdk/spdk_pid71979 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72012 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72048 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72109 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72198 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72636 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72690 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72729 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72745 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72812 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72828 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72895 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72911 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72957 00:25:09.063 Removing: /var/run/dpdk/spdk_pid72975 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73022 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73040 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73157 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73194 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73268 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73320 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73344 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73403 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73437 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73472 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73506 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73541 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73575 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73610 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73639 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73679 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73708 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73748 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73777 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73817 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73846 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73886 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73915 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73954 00:25:09.063 Removing: /var/run/dpdk/spdk_pid73987 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74025 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74059 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74095 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74159 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74252 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74560 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74572 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74609 00:25:09.063 Removing: 
/var/run/dpdk/spdk_pid74622 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74638 00:25:09.063 Removing: /var/run/dpdk/spdk_pid74658 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74676 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74691 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74710 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74724 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74745 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74764 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74783 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74793 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74817 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74831 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74852 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74871 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74879 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74900 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74936 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74944 00:25:09.064 Removing: /var/run/dpdk/spdk_pid74979 00:25:09.064 Removing: /var/run/dpdk/spdk_pid75043 00:25:09.064 Removing: /var/run/dpdk/spdk_pid75072 00:25:09.064 Removing: /var/run/dpdk/spdk_pid75081 00:25:09.064 Removing: /var/run/dpdk/spdk_pid75115 00:25:09.064 Removing: /var/run/dpdk/spdk_pid75119 00:25:09.064 Removing: /var/run/dpdk/spdk_pid75132 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75169 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75188 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75217 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75226 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75236 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75245 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75260 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75264 00:25:09.322 Removing: /var/run/dpdk/spdk_pid75279 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75289 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75317 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75344 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75353 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75382 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75391 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75399 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75445 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75457 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75484 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75491 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75499 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75512 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75514 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75527 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75533 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75542 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75616 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75658 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75768 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75807 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75847 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75861 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75883 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75903 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75935 00:25:09.323 Removing: /var/run/dpdk/spdk_pid75950 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76020 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76042 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76086 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76162 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76227 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76256 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76342 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76390 
00:25:09.323 Removing: /var/run/dpdk/spdk_pid76423 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76641 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76733 00:25:09.323 Removing: /var/run/dpdk/spdk_pid76762 00:25:09.323 Removing: /var/run/dpdk/spdk_pid77080 00:25:09.323 Removing: /var/run/dpdk/spdk_pid77118 00:25:09.323 Removing: /var/run/dpdk/spdk_pid77413 00:25:09.323 Removing: /var/run/dpdk/spdk_pid77821 00:25:09.323 Removing: /var/run/dpdk/spdk_pid78091 00:25:09.323 Removing: /var/run/dpdk/spdk_pid78871 00:25:09.323 Removing: /var/run/dpdk/spdk_pid79688 00:25:09.323 Removing: /var/run/dpdk/spdk_pid79800 00:25:09.323 Removing: /var/run/dpdk/spdk_pid79871 00:25:09.323 Removing: /var/run/dpdk/spdk_pid81129 00:25:09.323 Removing: /var/run/dpdk/spdk_pid81345 00:25:09.323 Removing: /var/run/dpdk/spdk_pid84698 00:25:09.323 Removing: /var/run/dpdk/spdk_pid84996 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85104 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85243 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85256 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85285 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85307 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85405 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85534 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85685 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85760 00:25:09.323 Removing: /var/run/dpdk/spdk_pid85951 00:25:09.323 Removing: /var/run/dpdk/spdk_pid86034 00:25:09.323 Removing: /var/run/dpdk/spdk_pid86127 00:25:09.323 Removing: /var/run/dpdk/spdk_pid86435 00:25:09.323 Removing: /var/run/dpdk/spdk_pid86783 00:25:09.323 Removing: /var/run/dpdk/spdk_pid86785 00:25:09.323 Removing: /var/run/dpdk/spdk_pid88952 00:25:09.323 Removing: /var/run/dpdk/spdk_pid88958 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89230 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89248 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89263 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89295 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89300 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89378 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89390 00:25:09.323 Removing: /var/run/dpdk/spdk_pid89494 00:25:09.581 Removing: /var/run/dpdk/spdk_pid89501 00:25:09.581 Removing: /var/run/dpdk/spdk_pid89604 00:25:09.581 Removing: /var/run/dpdk/spdk_pid89610 00:25:09.581 Removing: /var/run/dpdk/spdk_pid89999 00:25:09.581 Removing: /var/run/dpdk/spdk_pid90042 00:25:09.581 Removing: /var/run/dpdk/spdk_pid90151 00:25:09.581 Removing: /var/run/dpdk/spdk_pid90229 00:25:09.581 Removing: /var/run/dpdk/spdk_pid90527 00:25:09.581 Removing: /var/run/dpdk/spdk_pid90727 00:25:09.581 Removing: /var/run/dpdk/spdk_pid91102 00:25:09.581 Removing: /var/run/dpdk/spdk_pid91603 00:25:09.581 Removing: /var/run/dpdk/spdk_pid92410 00:25:09.581 Removing: /var/run/dpdk/spdk_pid92976 00:25:09.581 Removing: /var/run/dpdk/spdk_pid92982 00:25:09.581 Removing: /var/run/dpdk/spdk_pid94876 00:25:09.581 Removing: /var/run/dpdk/spdk_pid94938 00:25:09.581 Removing: /var/run/dpdk/spdk_pid94998 00:25:09.581 Removing: /var/run/dpdk/spdk_pid95053 00:25:09.581 Removing: /var/run/dpdk/spdk_pid95168 00:25:09.581 Removing: /var/run/dpdk/spdk_pid95229 00:25:09.581 Removing: /var/run/dpdk/spdk_pid95277 00:25:09.581 Removing: /var/run/dpdk/spdk_pid95339 00:25:09.581 Removing: /var/run/dpdk/spdk_pid95666 00:25:09.581 Removing: /var/run/dpdk/spdk_pid96809 00:25:09.581 Removing: /var/run/dpdk/spdk_pid96949 00:25:09.581 Removing: /var/run/dpdk/spdk_pid97192 00:25:09.581 Removing: /var/run/dpdk/spdk_pid97734 00:25:09.582 Removing: 
/var/run/dpdk/spdk_pid97893 00:25:09.582 Removing: /var/run/dpdk/spdk_pid98049 00:25:09.582 Removing: /var/run/dpdk/spdk_pid98142 00:25:09.582 Removing: /var/run/dpdk/spdk_pid98303 00:25:09.582 Removing: /var/run/dpdk/spdk_pid98414 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99063 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99098 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99128 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99382 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99416 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99447 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99870 00:25:09.582 Removing: /var/run/dpdk/spdk_pid99887 00:25:09.582 Clean 00:25:09.582 22:05:15 -- common/autotest_common.sh@1447 -- # return 0 00:25:09.582 22:05:15 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:09.582 22:05:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.582 22:05:15 -- common/autotest_common.sh@10 -- # set +x 00:25:09.582 22:05:15 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:09.582 22:05:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.582 22:05:15 -- common/autotest_common.sh@10 -- # set +x 00:25:09.582 22:05:15 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:09.840 22:05:15 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:09.840 22:05:15 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:09.840 22:05:15 -- spdk/autotest.sh@391 -- # hash lcov 00:25:09.840 22:05:15 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:09.840 22:05:15 -- spdk/autotest.sh@393 -- # hostname 00:25:09.840 22:05:15 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:09.840 geninfo: WARNING: invalid characters removed from testname! 
00:25:36.381 22:05:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:36.381 22:05:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:38.917 22:05:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:41.507 22:05:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:44.035 22:05:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:46.563 22:05:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:49.091 22:05:54 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:49.091 22:05:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:49.091 22:05:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:49.091 22:05:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.091 22:05:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.091 22:05:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.091 22:05:54 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.091 22:05:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.091 22:05:54 -- paths/export.sh@5 -- $ export PATH 00:25:49.091 22:05:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.091 22:05:54 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:49.091 22:05:54 -- common/autobuild_common.sh@440 -- $ date +%s 00:25:49.091 22:05:54 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721858754.XXXXXX 00:25:49.091 22:05:54 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721858754.gFvU5H 00:25:49.091 22:05:54 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:25:49.091 22:05:54 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:25:49.091 22:05:54 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:49.091 22:05:54 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:49.091 22:05:54 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:49.091 22:05:54 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:49.091 22:05:54 -- common/autobuild_common.sh@456 -- $ get_config_params 00:25:49.091 22:05:54 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:25:49.091 22:05:54 -- common/autotest_common.sh@10 -- $ set +x 00:25:49.091 22:05:54 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:25:49.091 22:05:54 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:25:49.091 22:05:54 -- pm/common@17 -- $ local monitor 00:25:49.091 22:05:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:49.091 22:05:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:49.091 22:05:54 -- pm/common@21 -- $ date +%s 00:25:49.091 22:05:54 -- pm/common@25 -- $ sleep 1 00:25:49.091 22:05:54 -- pm/common@21 -- $ date +%s 00:25:49.091 22:05:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721858754 00:25:49.091 22:05:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721858754 00:25:49.091 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721858754_collect-vmstat.pm.log 00:25:49.091 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721858754_collect-cpu-load.pm.log 00:25:50.046 22:05:55 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:25:50.046 22:05:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:50.046 22:05:55 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:50.046 22:05:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:50.046 22:05:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:25:50.046 22:05:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:50.046 22:05:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:50.046 22:05:55 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:50.046 22:05:55 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:50.046 22:05:55 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:50.046 22:05:55 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:50.046 22:05:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:50.046 22:05:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:50.046 22:05:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:50.046 22:05:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:50.046 22:05:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:50.046 22:05:55 -- pm/common@44 -- $ pid=102006 00:25:50.046 22:05:55 -- pm/common@50 -- $ kill -TERM 102006 00:25:50.046 22:05:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:50.046 22:05:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:50.046 22:05:55 -- pm/common@44 -- $ pid=102008 00:25:50.046 22:05:55 -- pm/common@50 -- $ kill -TERM 102008 00:25:50.046 + [[ -n 5840 ]] 00:25:50.046 + sudo kill 5840 00:25:50.312 [Pipeline] } 00:25:50.331 [Pipeline] // timeout 00:25:50.336 [Pipeline] } 00:25:50.350 [Pipeline] // stage 00:25:50.355 [Pipeline] } 00:25:50.369 [Pipeline] // catchError 00:25:50.378 [Pipeline] stage 00:25:50.380 [Pipeline] { (Stop VM) 00:25:50.394 [Pipeline] sh 00:25:50.670 + vagrant halt 00:25:53.950 ==> default: Halting domain... 00:25:59.226 [Pipeline] sh 00:25:59.506 + vagrant destroy -f 00:26:02.790 ==> default: Removing domain... 
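[Editor's note] The stop_monitor_resources step above tears down the two resource collectors (collect-cpu-load, collect-vmstat) started before packaging: each writes a pid file under the power/ output directory, and the EXIT trap sends it SIGTERM. A rough sketch of that pid-file pattern, assuming the file names shown in the log; the helper below is illustrative, not the real scripts/perf/pm code:

#!/usr/bin/env bash
# Illustrative sketch of the pid-file based monitor teardown seen in the log.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power   # path taken from the log

stop_monitors() {
    local pidfile pid
    for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
        [[ -e "$pidfile" ]] || continue          # collector may not have started
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true    # ask the collector to flush and exit
    done
}

trap stop_monitors EXIT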
00:26:03.060 [Pipeline] sh 00:26:03.341 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/output 00:26:03.350 [Pipeline] } 00:26:03.370 [Pipeline] // stage 00:26:03.376 [Pipeline] } 00:26:03.395 [Pipeline] // dir 00:26:03.401 [Pipeline] } 00:26:03.419 [Pipeline] // wrap 00:26:03.426 [Pipeline] } 00:26:03.443 [Pipeline] // catchError 00:26:03.453 [Pipeline] stage 00:26:03.456 [Pipeline] { (Epilogue) 00:26:03.472 [Pipeline] sh 00:26:03.789 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:09.067 [Pipeline] catchError 00:26:09.069 [Pipeline] { 00:26:09.082 [Pipeline] sh 00:26:09.362 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:09.362 Artifacts sizes are good 00:26:09.370 [Pipeline] } 00:26:09.387 [Pipeline] // catchError 00:26:09.397 [Pipeline] archiveArtifacts 00:26:09.404 Archiving artifacts 00:26:09.579 [Pipeline] cleanWs 00:26:09.590 [WS-CLEANUP] Deleting project workspace... 00:26:09.590 [WS-CLEANUP] Deferred wipeout is used... 00:26:09.596 [WS-CLEANUP] done 00:26:09.598 [Pipeline] } 00:26:09.615 [Pipeline] // stage 00:26:09.621 [Pipeline] } 00:26:09.636 [Pipeline] // node 00:26:09.641 [Pipeline] End of Pipeline 00:26:09.679 Finished: SUCCESS
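[Editor's note] For reference, the coverage post-processing recorded earlier in this run (autotest.sh@393 through @399) reduces to a capture, merge, and filter pass with lcov. A condensed sketch using the output paths shown in the log; the rc options and the full filter list from the run above are abbreviated here:

#!/usr/bin/env bash
# Condensed from the lcov invocations recorded in this log; not a verbatim copy.
out=/home/vagrant/spdk_repo/spdk/../output
repo=/home/vagrant/spdk_repo/spdk

# Capture test-time coverage for the repo into cov_test.info.
lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

# Merge the baseline and test captures into a single tracefile.
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Strip coverage for code that is not SPDK's own (DPDK, system headers, sample apps).
lcov -q -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' -o "$out/cov_total.info"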